More Content - Including Podcasts

Thursday, May 3, 2012

#BCNET_HPCS Issues and Challenges in Engaging in Research with Private and Proprietary Data

Stephen Neville, UVic
Rozita Dara, IPC-ON
Patricia Brantingham, SFU
Caitlin Hertzman, Population Data BC

This session was intended as four case study presentations of how to manage the security and availability of big data in research.

The claim from Stephen is that the responsibility fundamentally falls on the researcher to understand the issues and ensure that the data and results are handled correctly.

Stephen outlined several existing scenarios including single computers or drives with encryption, external service providers or "cloud" solutions, private clouds, and the pluses and minuses of all of these options. All of these are very common to us who work in these research environments and there were no surprises here.

EmuLab in Utah was cited as an HPC facility that VLANs out the various research computing environments inside a secure data centre. Compute Canada is awaiting a response to a proposal it has submitted to build a similar facility.

When asked how to ensure that data destruction is complete at the end of a research project, Stephen said he takes complete responsibility, buys self-encrypting drives for servers, and does everything himself, and that's the solution. This approach was challenged from the university management and privacy-responsibility perspective. I agree that Stephen's approach is at best a stop-gap measure, and is not efficient, institutionally accountable, or scalable.

Rozita spoke to us about virtual tool use to protect information freedom. The relationship between the amount of data becoming available, the value of that data, and the legislation and controls available to govern the use of that data was posed to us as an increasing challenge.

The challenges Rozita summarised as:
Data overload
Unauthorised use
Over-regulation of data
Privacy in context (one ring will not rule them all)

We were encouraged to check out the site http://PrivacyByDesign.ca for a summary of her research on these challenges and her proposed solution, "SmartData", which as I understood it is effectively tagging data with metadata and building a parallel architecture to manage it. A SmartData symposium will be held in Toronto for those interested.

Population Data BC is a clearing house for health, demographic, occupational, environmental, and educational data from various public bodies. They work out of UVic, SFU, and UBC to provide the data and training on how to use it. They do not conduct research themselves, but provide the data to researchers.

Three models of privacy were suggested: enterprise risk management, information governance, and privacy by design. A best practice is to understand each of these and apply the aspects that best suit your organisation and the data you collect, store, and use.

Best practices that are used by PopData BC are:
Physical zoning with fobbed access and alarms
Video surveillance
Fortification of walls
Sign in and escort for visitors
Network zoning with two factor authentication
Dummy terminals (physically separate computers for working with secure data versus general administrative work)
Separation of identifiers from content
Proactive linkage (data anonymization)
Auditing, logging, monitoring
Secure research environment (a VPN for researchers to access data pools)
Encryption (full data lifecycle protection)
Data destruction methods
External auditing
Data access request formats
Agreements
Privacy policy, incident response plan
Privacy training (and testing after)
Criminal records check
Close working relationship with OCIO et al
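The "separation of identifiers from content" and "proactive linkage" practices above can be sketched roughly as follows. This is my own illustration (the field names and random-key scheme are hypothetical), not PopData BC's actual implementation:

```python
import secrets

def pseudonymise(record, identifier_fields):
    """Split a record into an identifier entry and a de-identified content
    record, linked only by a random study key held in a separate store."""
    study_id = secrets.token_hex(8)  # random linkage key; meaningless outside the study
    identifiers = {f: record[f] for f in identifier_fields}
    content = {f: v for f, v in record.items() if f not in identifier_fields}
    content["study_id"] = study_id
    return study_id, identifiers, content

# Hypothetical health record
record = {"name": "Jane Doe", "phn": "9876543210", "diagnosis": "asthma", "year": 2011}
sid, ids, content = pseudonymise(record, identifier_fields=["name", "phn"])
# Identifiers live in an access-controlled store; researchers see only `content`.
```

Linking two datasets on the study key (the "proactive linkage" step) then never exposes the direct identifiers to the researcher.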

Patricia gave us illustrations of the criminology studies, data, and data representations used at SFU. Plans are for a provincial data store of criminology relevant data, and a Commonwealth internetwork to share research.

Lorenzo from SFU gave us a vague but useful explanation of the complexities involved in building the 5 layer secure environment for housing this research data.


- Posted using BlogPress from my iPad

Location:W Hastings St,Vancouver,Canada

Building a "Zero Energy" Data Centre

Firstly, as opposed to zero energy, we're talking about a movement towards a zero net emissions HPC data centre.

The Energy Efficient HPC Working Group focuses on driving energy conservation measures and energy efficiency in HPC data centre design. The group is open to all parties and can be found online at http://eehpcwg.lbl.gov.

There are three subcommittees:
Infrastructure committee working on liquid cooling guidelines, metrics (ERE, Total PUE), and energy efficiency dashboards.
The system team is working on workload based energy efficiency metrics, and system measurement, monitoring, and management.
The conferences team puts on a monthly webinar every second Tuesday, and is primarily focussed on awareness.

A related pay-for membership group is The Green Grid, which for a $400 annual fee provides access to top resources to learn and apply.

75% of the top 500 super-computing facilities are in the US, China, and Japan. The top system (in Japan) uses 12.66 MWatts; the average of the top 10 is 4.56 MWatts, with an average efficiency of 464 Mflops/watt.
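Mflops/watt is simply sustained floating-point rate divided by power draw. As a unit sanity check (the Tflop/s figure below is back-calculated from the quoted average, not a published number):

```python
def mflops_per_watt(rmax_tflops, power_mwatts):
    """Energy efficiency: sustained rate in Mflop/s divided by power in watts."""
    mflops = rmax_tflops * 1e6   # 1 Tflop/s = 1e6 Mflop/s
    watts = power_mwatts * 1e6   # 1 MW = 1e6 W
    return mflops / watts

# A 4.56 MW system would need to sustain ~2,116 Tflop/s to hit the
# 464 Mflops/watt average quoted above.
print(round(mflops_per_watt(2116, 4.56)))  # 464
```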

The WestGrid HP compute system at UBC is the 189th most powerful supercomputer in the world, but the 398th most efficient. This is a derived number, as only half the systems in Canada have submitted their numbers to the EE HPC WG.

There are three tiers of power measurement quality:
1. Sampling rate; more measurements/higher quality
2. Completeness of what is being measured; measuring more of the system translates to higher quality
3. Common rules must be followed for start/stop times.

The EE HPC WG has a beta methodology that is being tested in Quebec.

Energy use in US datacentres doubled between 2000 and 2006, from 30 billion kWh per year to 60. Awareness, efficiency, and the economic downturn affected that trend, and in 2011 the growth since 2006 was calculated to have slowed to 36%.

PUE = total energy divided by IT energy. This is equivalent to (cooling + power distribution + misc + IT) divided by just the IT energy consumption.

PUE average is 1.91 according to EPA Energy Star. Intel has a data center operating at 1.41, and the Leibniz Supercomputing Centre is predicted to operate at 1.15.

PUE does not account for energy re-use, but ERE does. ERE is the same as PUE, except that it subtracts reused energy from the total before dividing by IT energy.
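Both metrics reduce to simple ratios over the same components; a minimal sketch with illustrative numbers:

```python
def pue(cooling, power_dist, misc, it):
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return (cooling + power_dist + misc + it) / it

def ere(cooling, power_dist, misc, it, reused):
    """Energy Reuse Effectiveness: as PUE, but reused energy is subtracted
    from the total before dividing by IT energy."""
    return (cooling + power_dist + misc + it - reused) / it

# Illustrative annual figures in MWh: a PUE-1.5 facility that reuses
# 200 MWh of waste heat improves its ERE to 1.3.
print(pue(300, 100, 100, 1000))       # 1.5
print(ere(300, 100, 100, 1000, 200))  # 1.3
```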

HP discusses next steps in re-thinking servers and datacentre designs. We're told the median PUE for DCs in 2009 was over 2; today, efficient systems and the use of chillers and water cooling can get you to about 1.3. The FreeAir/EcoPOD methodology can theoretically get you to 1.1.

The lowest a PUE calculation can get to is 1, so we're challenged to look for efficiencies in the 1.x range and to pay attention to both the numerator and denominator of the fraction. CPU (48%) and DRAM (18%) are the biggest energy/heat pigs in HPC systems. HP now tells us about Project Moonshot, which features a workload-tuneable compute to I/O ratio, leveraging the cost structures of commodity processors. Reference is made to how ARM processors operate, and how this methodology is applicable to efficient processing.

Water is ~50 times more efficient at removing heat than air, making the argument that using air to cool our systems is significantly less efficient than water. Compare liquid-cooled to air-cooled engines: air cooled is much simpler, but highly inefficient.

Liquid cooling has been around for ages, but has not been attractive from a cost perspective. Power costs are rising, and liquid cooling options are now becoming available and (almost) commodity. The argument from HP is that this is what we will be using in our data centres in the immediate future, although nothing is readily commercially available to the higher ed market space.

Last year, ASHRAE issued a whitepaper on liquid cooling guidelines. This includes standard architectures and metrics that must be measurable to achieve success in liquid cooling for your data centres. The specs are rated in 5 levels of increasing efficiency.

Large improvements have been made in the past ten years in energy efficiency, and the focus will now turn to total sustainability. This includes looking at metrics for carbon footprint, water usage, and energy consumption.

A key consideration needs to be the location of the data centre, looking at the temperature and humidity of the locale. Based on all these factors, Canada is actually the best location for efficient data centres in North America. Yet all the new data centres are being built in locales in the US where either land or power is cheap, but the over-all efficiency is poor.

An example of a great pilot site is the cow-manure-powered data centre outside of Calgary, AB. The discussion moved to total carbon footprint, and the Greenpeace "So-Coal Network" video was shared as an example of two things: the poor decision making around coal-powered data centres, but also the pressure that can and should be put on North American (and global) organisations to make the right decisions.

The challenge is that it is short-term profitable to pollute. We're posed with the idea of using our campus data centres as a carbon offsetting tool. By heating buildings with the waste heat from the computer-cooling water, we can claim not only reduced natural gas use, but carbon credit compensation in a cap-and-trade situation to reduce operating costs.

Lake Mead's loss of water, and potential complete loss by 2021 is cited as part of the challenge for us to think hard about evaporative cooling solutions. Stay tuned for the site www.top50DC.org which will create a playing field and world stage for accountability in truly green computing and data centres.

- Posted using BlogPress from my iPad

Location:W Hastings St,Vancouver,Canada

Next Steps for Canada's HPC Platform

Jill Kowalchuck, Interim Executive Director, Compute Canada

Compute Canada hosts 1,273 researchers and PIs and 2,379 grad students, and has been the infrastructure supporting research behind 3,500 publications. The services Compute Canada has built for $29M would cost the research community $50M to access without a centralised subsidised service. 160k cores and 15 PB of disk, at 75%-90% utilisation, make up the Compute Canada infrastructure deployed across the country.

Marshall Zhang, a 16-year-old medical researcher, identified a new drug cocktail that could treat cystic fibrosis using Compute Canada resources under the direction of a PI in Ontario.

Expansion is planned into secured data centres and expanded storage to enable medical/biomedical, criminology, and other research using sensitive industrial data. The information security program is currently under revision in support of these initiatives, to ensure compliance federally and provincially.

The three current priorities are:
Governance
Cost benefit analysis of data centres
CEO search

Compute Canada officially launched their new website today at http://www.computecanada.ca

HPCS 2013 will be hosted in Ottawa, dates and location TBD.


- Posted using BlogPress from my iPad

Location:W Hastings St,Vancouver,Canada

Wednesday, May 2, 2012

Mobility & Security on Campus

Panel from BCNET Application Advisory Committee
Phil Chatterton, UBC
Paul Stokes, UVic
Leo de Sousa, BCIT
Hugh Burley, TRU

UBC figures about 180,000+ mobile devices on campus
70% IOS, 20% Android, 10% other
There is a shift underway, but this is current state

A mobile web first approach is in place as of this year. Kurogo Mobile Platform from MIT & Harvard is in use, and iOS and Android apps are in development campus-wide. A campus wide encryption program has launched (WDE), and an examination of mobile use and security program is ongoing.

UVic claims to be far less mature than UBC when it comes to serving the demands of a mobile-hungry user community. Faculty, staff, and students each have different needs, and need to be supported and managed differently.

66% of mobile devices in use at UVic are iOS based. The focus is intended to be on teaching and learning when it comes to UVic's IT planning. A focus on privacy and security is also essential.

BCIT has had a more administrative focus on BYOD. They noticed a significant uptake in iOS and tablet use from an employee point of view as of this past Christmas. Employees wanting a work-life blend will want to use the tools they are comfortable with; hence the consumerisation of staff systems at BCIT.

Heavy use of Citrix to deliver applications has been a stronger focus than managing mobile platforms or delivering virtual desktops. Hosting virtual desktops will be a focus for next year, as will network access control in a controlled but not closed methodology; systems will always get at least Internet access.

A vulnerability has made its presence known at BCIT - Hole 196, a WPA2 weakness that allows a man-in-the-middle attack by an internal user; the advice from Leo is to run HTTPS on all your servers.

Hugh, from TRU, states part of their success comes from a centralised IT group, and the most important achievement is in establishing standards and governance. This has led to an understanding of what they are trying to protect, and why.

A mobile device management server architecture is in place at TRU for iOS/Android and Blackberry. This helps address the issues with mobile devices, which are best understood when we understand how we use the devices and why.

TRU is seeing an exponential growth of mobile devices used to access campus electronic services for administration and learning, as well as the fact that people have multiple devices they want connected wirelessly.

When asked whether these technologies are being pushed, pulled, or dragged by campus IT services groups, the consensus was a mixture of all three, varying at each campus.

Leo posited that access to resources via mobile and consumer devices should be determined by the security and privacy requirements of the service being accessed. That framing addresses the challenge of using NAC, mobile device management, and application delivery to ensure that we support the delivery of education and research.

I asked whether researchers are being considered under a similar approach to academic and administrative users, and the panel agreed that they are unique in many respects, and that the basic principle of educate-before-enforce is vital with that community.

- Posted using BlogPress from my iPad

Location:W Hastings St,Vancouver,Canada

BC's Freedom of Information & Privacy Act - Implications for Higher Ed

Paul Hancock, UBC
Bill Trott, UVic
Craig Neelands, SFU

All three are members of the BCNET security working group.

Paul provided a privacy primer for everyone: a quick history of how we've gotten to where we are today in Canada, the paradigm of informational privacy, and the distinction between security and privacy. Security is about protection from threats; privacy is related, but different. Privacy is more about what you can and cannot do with information.

We are subject to FIPPA, one of Canada's most stringent rule sets governing the collection, storage, protection, retention, use, and disclosure of information. Significant implications around storage come from the requirement that information can only be stored in Canada.

Failure to comply affects us not only financially, but reputationally.

Paul shifted to discuss privacy impacts of cloud computing. After defining cloud computing, Paul reminded us of the constant impetus we have in Higher Ed to move to cloud based services. The primary implications are foreign storage, and access issues such as the US Patriot Act. Consent may be a loophole allowing this, but it isn't bullet-proof. Even encrypting the data does not make this acceptable in the eyes of the law.

Security, retention, jurisdiction, all pose challenges - in fact roadblocks - to moving services like email to a cloud solution with foreign data storage or movement.

Recent developments in the act may provide interesting options as the Minister is apparently being given powers to waive compliance.

The privacy impact assessment topic was next covered by Bill. When a breach is noted, the first two questions will be "was it encrypted" and "is there a privacy impact assessment"?

A PIA is not so much a 19 page form as it is a process. A PIA is a compliance tool, a risk assessment and mitigation tool, a decision making tool for the executive, and most importantly, an educational tool.

Section 69.5.3 in the recently revised FIPPA clarifies that we have a responsibility to conduct PIAs, and while it's not clear that it is mandatory, we should be erring on the side of caution. Several situations were brought up by Bill where we need to be doing a PIA, and they all came across as common sense.

The root is to build trust inside and outside our organisation, show leadership in privacy, and have the best defence in the event of a breach.

Craig defined privacy breaches to us, and cited examples common in the higher ed sector. Craig showed that there is a difference between privacy and data breaches, so that we can focus responses to privacy breaches.

We should start with a framework for privacy breach responses; acceptable use policies that are in effect and understood, breach response processes and tools, and an understanding of when and how to notify the Office of Information & Privacy Commissioner. Many of these tools are available from the OIPC website.

SFU had 10 breaches last year, which has led to revisions to processes and tools, and made awareness of a need to account for the financial impacts.

The question came up as to whether Google Analytics was a challenge, and UVic noted that they've developed their own system to deal with that. It was noted that Google Analytics has the option to turn off collecting the last octet of IP Addresses, and that may or may not be a solution.
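For reference, that Google Analytics option works by zeroing the last octet of the visitor's IPv4 address before it is stored, so the address identifies a subnet rather than a host. Roughly (my own sketch, not Google's code):

```python
def anonymise_ip(ip):
    """Zero the final octet of an IPv4 address, as GA's IP-anonymisation
    option does, so the stored address identifies a subnet, not a host."""
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymise_ip("142.104.6.53"))  # 142.104.6.0
```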

A great question came up asking if BCNET was in violation due to the pathway through Blaine WA that has data transmission through the US for traffic to and from UVic. The answer is that transmission does not legally equal access or storage, so at present we believe we are compliant. This discussion spun out further to an excellent debate with no solid answer.



- Posted using BlogPress from my iPad

Location:W Cordova St,Vancouver,Canada

BCNET's Advanced Network Projects & Shared Resources

Marilyn Hay
Andree Toonk
Scott Jamieson

Scott presented on building a fibre network on the Saanich Peninsula, from 540 Blanshard to UVic, 9.7 km.

Scott reviewed the key decisions that needed to be undertaken in the planning process, reviewed the complex permits process, and then shared some of the key challenges in the construction process.

The network management topic was brought up, starting with fibre inventories and ensuring that fibres through public venues are clearly labelled and recorded, to ensure clarity of ownership. Your fibre mgt system should also track splices, OTDR readings, and transport circuits.

The point made at the end was to always add as much spare capacity of fibre and ducting as you can.

Marilyn presented on the recent WDM implementation. A ring from UBC to BCIT to SFU was created using 4 strands and allowing a diverse path with cross connects. Enter the WDM solution.

An RFP was issued requesting multiple wavelengths (5 x 10 Gb to each site), scalability, and the lowest reasonable cost. A decision on DWDM vs. ROADM was needed, and when the results came back, a ROADM solution from ADVA was selected.

The ROADM solution with equipment at each site allows for changes to channels and wavelengths without interruption to pass throughs. The solution is scalable to add wavelengths for each site and can run as protected or unprotected circuits enabling high availability as needed.

All the sites will be brought up over the summer of 2012.

Andree shared with us the network management tools at BCNET. The first challenge was to have a CMDB; currently a homegrown solution is in use. They are planning to open-source this solution, which is based on PHP, Perl, Nagios plug-ins, SNMP, & MySQL.

This system provides device management documenting all devices in the network, interface info, & statistics collection. Location, contact, and IP address information are also core functions of the tool.

The tool has been built with a view towards managing at the service level, by documenting the network services provisioned to clients, and provisioning information collected in customer-service-oriented views.

From an event management perspective, inspiration was taken from the Nagios framework to include key incident and reporting functions. Additional components, including one for change management, are built in.

Different algorithms are available in the incident management component to allow customisation of the logic around alarm escalation and incident impact.

Andree.toonk@bc.net is interested in working with anyone who wishes to collaborate.

Check it out at https://wiki.bc.net/atl-conf/display/bcnetcmdb

Marilyn updated us next on the Virtual Routing Service implemented via the Juniper devices in use at BCNET. This solution supports BGP, IPv4, IPv6, IPSec, & GRE tunnels, and allows clients to manage their own virtual routers. What this provisions for customers is an IaaS model, leveraging BCNET staff & resources. This is a new service offering, with more information available from the site.

- Posted using BlogPress from my iPad

Location:W Cordova St,Vancouver,Canada

HP Discover 2012 - Why I'm Going

Good morning, and thanks for checking in again. Amidst a series of blogs I'm writing on the BCNET HPCS conference in Vancouver, I thought I'd throw in a bit of a diversion to give some insight on HP Discover 2012 in Las Vegas.

For the past few years I've been a member of the Vivit Worldwide board of directors, helping run the official and unbiased truly global user community for HP Software users. Long before that, I've been a local chapter leader in Vancouver, and have spoken at least three times at the conferences.

I've been going to HP Discover for many years now, through its various iterations as OpenView Forum, HP Software Universe, and now HP Discover. I would have thought that I'd be burnt out by now, and a couple of years ago I was getting close to that, until I got involved in the SoMe (social media) area. Attending a conference as an unofficial (hopefully one day more official!) blogger has been eye opening, and allowed me to view the conference with a different paradigm.

Instead of looking at sessions, booths, and plenaries thinking "what can I get out of this" I now look at abstracts, tracks, keynote speakers, and the floor show with the perspective of "what can I learn? What would be interesting to the colleagues back at the office?"

One of the key things I'm interested in this year, from the big picture perspective, is what Meg Whitman has planned for HP. With the rapid succession of CEOs in the past few years, I think everyone is holding off a bit to see what the software and hardware giant will undertake in 2012/13. On a more tactical scale, I'm planning to meet up with some of the HP technical crew to get a first-hand look at the new Operations Manager appliance. You can bet that I'll have some blog entries for Vivit on both of these topics, plus some additional input where I can.

If you are planning to be at the conference, come find me in the Community Lounge or Bloggers area on the show floor, I'm always happy to meet new people and learn your stories of why YOU are interested in HP Discover.


- Posted using BlogPress from my iPad

Location:W Cordova St,Vancouver,Canada

Tuesday, May 1, 2012

Impact of Consumerization, BYOD, & Social Media on Campus

The CIOs (or representatives there-of) formed a panel to discuss this topic.

Greg Conden, UNBC
Michael Thorsen, UBC
Stephen Lamb, BCIT
Paul Stokes, UVic
Jay Black, SFU
Brian MacKay, TRU

Each speaker in turn shared their perspective on how they are managing these challenges.

Some examples of technological projects related to these needs are: Identity based firewalling, application delivery platform agnostic, virtualised desktops, application virtualisation, social intranet, self-subscribed mass notification

Acknowledgement was made of a cultural shift we need to respond to rather than keep our heads in the sand. We need to engage in the discussions around technologies we aren't comfortable with, but that students live and function within.

How does the institution keep control over the data that the University is accountable for? FOIPPA in BC is pretty serious business, and the institution can be liable regardless of the actions of the individual.

Comparison made to a K-12 situation, recommendations in the Manitoba region:
IT must buy and manage all access points.
Move to IPv6
Understand the relationships of people to computers for function
Vendor agnostic, industry standard environment
Delineation between business network and public network, logically separate. Students have ONLY internet access.
Access control to the secure network; full NAC for business VLAN
Anyone connected to the secure network must agree to all T&C's for access.
Training and support plans
Acceptable use policy for users of public network (students)

TRU piloted giving nursing students iPads, and the use cases evolved beyond what was originally expected.

The question was raised whether these technologies are really about teaching and learning. Stephen proposed that's a moot point, as we must bridge the world of academia and the workplace of the future, and that we should be facilitators and support the evolutions in pedagogy.

It is proposed that these are the most important changes we can support for faculty and students over the next two years; without getting on board, we alienate those we are there to build a teaching environment for.

Stephen suggested that the CIOs should not be afraid to engage in SoMe. Michael asserted that if you aren't going to participate, you should at least claim your space or it'll be claimed for you.




- Posted using BlogPress from my iPad

Location:W Cordova St,Vancouver,Canada

CANARIE's Next Mandate: The Way Forward

Speaker is Jim Ghadbane, CTO of CANARIE.

CANARIE just completed a new 2-year funding cycle at $40M. The funding will be used to improve the effectiveness of research in Canada and accelerate growth of Canada's ICT industry. This year a new connection between Calgary and Edmonton will be lit up, as will Thunder Bay to Winnipeg.

CANARIE runs Canada's ultra-high-bandwidth research network, with primary investment from the Government of Canada.

The strategic objectives are:
Create a world leading collaboration network
Research platform infrastructure
Stimulate ICT innovation
Demonstrate operational excellence
Evolve funding models and reduce the risk and impact of funding cycle
Bridge the gap and lower barriers between research, education, and the private sector

Research traffic is projected for a tenfold increase in network bandwidth over the next two years, and the current network capacity will be exceeded by mid-2012/13. By 2017 the estimate is 526,224 TB per annum. Research traffic grew from 6.7 PB to 46.1 PB between 2007 and 2011.

Commodity services used by CANARIE will be outsourced.



- Posted using BlogPress from my iPad

Location:W Cordova St,Vancouver,Canada

#BCNET_HPCS Shared IT Services for Research & Higher Ed

Mike Hrybyk, BCNET CEO discussed the expanding mandate for BCNET.

UBC, SFU, UVic, BCIT, UNBC, & TRU are the core members of BCNET, and these member institutions run the network as a consortium, a unique model in North America. The 24x7 NOC for BCNET is based at UBC.

The transit exchange in each major centre serviced is the interconnect point for public and private sectors to buy into the services of BCNET.

Having provided a background on Internet transit services at BCNET, Mike transitions us to what kinds of shared services BCNET is expanding its mandate to provision. Shared data centres, cloud computing, back up services, & video conferencing.

A proposal is currently with the BC government to run three provincial shared data centres. Storage procurement and data back up services are at $250/TB per year for back up, and the ability to leverage the BCNET deal on storage hardware.

A small test cluster using eucalyptus is currently in play to provision cloud computing infrastructure.

Bluejeans, a cloud based MCU is being leveraged to provide cloud-based VC services. Integrates with Skype, GoogleTalk, and HDLink. Mike indicates his feeling that this is a game changing solution and service offering.

BCNET partnered with the Canadian Access Federation and was the first organisation in Canada to join eduroam, and 36 National campuses, including 11 in BC offer wireless roaming.

BCNET is considering use of ServiceNow as a shared SD solution across all member campuses. This may be a bigger bite than is being presented, given our experience at UBC.

BCNET intends to provision integrated HDVC services, and video storage and streaming for academic and research purposes, to manage the challenges around intellectual property rights in an inter-institutional paradigm.

Shared network management tools, fibre optic asset management, and a unified client portal are on the roadmap for BCNET.

Future service areas summarised are:
Networks
Software
Service management
Video conferencing
Elastic computing
Storage and backup

- Posted using BlogPress from my iPad

Location:SFU Harbour Centre, Vancouver BC

#BCNET_HPCS Plenary

So it's the first session, the opening plenaries for this 12th annual 3 day conference. The focus is on higher ed shared network and computing services. We're pretty bought into this as we use BCNET to provision our provincial video conferencing system that facilitates the distributed medical program.

Jay Black, Chairman of the BCNET board, welcomed us all and noted that this is the first year that HPCS and BCNET have partnered on this event. Jay shared that the line-up of speakers was chosen to ensure quality information for the research and IT attendees, and is not only being recorded for future playback, but webcast in realtime.

A copy of backbone magazine was handed out to attendees, I'll be reading through that later, and provide my thoughts.

Hashtag for the event is #BCNET_HPCS.

Mike Hrybyk, CEO of BCNET & a founder of Canadian Internet services is up to welcome us and explain some of the background for the event. Mike explained the track breakouts, and how speakers are selected by working groups on the five different topic areas, ensuring that the content is focussed on information that attendees will find of value.

Jill Kowalchuk, Executive Director for Compute Canada was introduced next. Jill shared some information about the involvement and support of various vendors and institutions to ensure the success of the HPCS part of the event.

Last introduction was for Joe Thompson, Acting ADM, Ministry of Advanced Education. Joe shared some of the government's perspective on the importance of BCNET for education, research, and innovation in post secondary. Joe discussed recent conversations he has had with higher ed technology leadership and the increasing rate of change on the demand for services in academics, and the responsiveness we as tech leaders in higher ed must have to these demands. Joe announced a suite of online tools launched recently to support students across BC.

Stephen Wheat, GM of Intel's HPC business is the final introductory speaker before our keynote speaker, and shared some industry perspectives on the conference themes. Stephen proposed that HPC is on the verge of commoditisation, that a balance between task complexity and user accessibility is coming. The road to this state aligns with the connect, compute, and collaborate theme of this conference.

The keynote speaker Leonard Brody was introduced. Leonard's theme is "This Monumental Shift." Leonard tells us his job is to look 3 to 5 years out to see trends, which is ironic since the last time I saw him speak, 2 months ago, he said there's no point looking forward more than 365 days. And then the lightbulb goes on for me as he launches into effectively the same presentation I saw from him 2 months ago at a leadership session.

Leonard states we are at a conjunction of four major changes in civilisation: economic, environmental, technological, and generational. To understand these changes and be prepared, we need to understand the historical context of how we got here, and understand the impacts and drivers of human behaviour. Humans are changing faster now than they ever have, and need a compass and roadmap for where we are going in the next year, and leadership that understands all these factors.

Leonard poses that we should consider why the Internet matters. Historically, we are referenced to the uniting of West and East of the US via railway, and how the US shifted in just over a decade to become a world economic leader.

All significant movements in media throughout history were restricted by cost and government intervention. Until the Internet. This is a paradigm shift that may preclude using the past to predict the next shifts in technology and sociology.

Sidney Crosby in 2010 is compared to Paul Henderson in 1972; 3.5 million status updates on Facebook in 30 minutes when Sid scored the "Golden Goal." The planet has moved to a level of interconnectivity unprecedented in rate of social and technological change. The point is that we must pay attention to cycles, natural and man made, as they are shrinking and will impact us and the world around us.

Our physical & virtual lives are at a confluence, and key markers that illustrate how we are changing are trust, relationships, memory, brain physiology/use, political governance, and the differentiation between the markers in our physical & virtual lives.

In the end, it was actually great that I had the opportunity to hear Leonard's presentation again, as I took different ideas from it. I'll also be looking to watch the movie "Waiting for Superman" Leonard recommends.

- Posted using BlogPress from my iPad

Location:SFU Harbour Centre, Vancouver