More Content - Including Podcasts

Wednesday, June 24, 2009

Current Networking Trends that Affect Network Management

Trends in network management are understandably driven by trends in network architecture. Network architecture tends to be viewed as monolithic and unchanging; this is far from the truth. Networks tend to go through cyclical evolutions roughly every five years, when the ever-increasing plethora of network-dependent technologies builds up a critical mass and forces change on the network. Like every other change the IT manager must face, these shifts must be adapted to and managed effectively because of the pressure they place on the networks that keep Information Technology’s life-blood flowing.


Convergence

Voice over IP (VoIP) has been in the workplace for some time now, and most networks have already adopted, or planned for, this technology. Those that have not will need to in short order. Even if your shop will never use VoIP as it exists today, other convergence requirements are around the corner: it is foreseeable that other business areas or the telco providers will supply sufficient momentum or incentive for this change to take place.

Support of VoIP solutions can include requirements for Power over Ethernet (PoE), Virtual Local Area Networks (VLANs), and traffic prioritization. These technologies have in the recent past outstripped the ability of legacy NMSs to properly monitor and manage them.

Network-based video conferencing and streaming audio for training and other business (and often non-business) requirements may not need PoE, but they demand traffic prioritization and VLAN capability on the network. Finding a corporate network in place today that does not support VLANs or traffic prioritization is rare, but what about the NMS that monitors and reports on these technologies?

The other factor to consider with respect to convergence and its impact on NMS choices is that convergence-based technologies entering the workplace tend to be very high-profile as far as the public image and business operation of the workplace are concerned. When the telephones don’t work, or the customer WebEx sessions fail to operate smoothly, customer perception of the organization is negatively impacted.

An NMS choice needs to be designed to support convergence technologies, or to offer a supported integration with a point solution from the convergence technology vendor (e.g., a proven “plug-in” capability with your Cisco IP telephony management toolset).


Mobility
An increasing demand is placed on today’s networks to support mobile computing solutions, from laptops and Personal Digital Assistants (PDAs) to wireless VoIP devices. This is by no means an exhaustive list, but clearly the expanded use of these types of end-user computing technologies is driving the increase in deployment of wireless networking technologies.

As more wireless Local Area Networks (WLANs) are deployed, many shops are seriously questioning the continued value of having multiple physical Ethernet drops for every person’s work area.

The increased dependence on WLANs for business-critical functions is a change driven in from the network edge, as opposed to outwards from the network core. End-users bringing more of these technologies into the workplace, with the expectation of the same business functionality they have had from their hard-wired desktop systems, is driving this requirement at a nearly exponential growth rate.

Deployment of wireless network technologies to keep pace with demand can be a risky business, and your NMS selection needs to keep pace as well. If your organization is seeing growth in wireless technologies, ensure that you select an NMS with the scalability to add monitoring for the quantity of Wireless Access Points (WAPs) that will be deployed; this can be a significant additional number of “managed nodes.” You will also need to make some strategic decisions as to whether you will monitor the wireless devices attaching to the WAPs.

A further consideration when selecting an NMS for a wireless environment is support for the control and management infrastructure used between the WAPs and the wired network. Large wireless deployments will often have centralized controllers that manage groups of WAPs. These architectures will also likely need an enhanced ability to monitor security-related aspects.


Security
A trend in networking is the ability to apply security controls at the network edge. This useful concept requires underpinning technology that needs to tie back into your NMS for control and audit purposes. Access to network ports is managed by intelligent edge switches that leverage RADIUS technology and tie it back into the directory systems, thus controlling who is authorized to connect, physically or wirelessly, to the network.
However you implement this, your NMS of choice needs to be aware of “bad connections” and forward those alarms to your network and/or security people. Perhaps there are even automated controls you want to leverage for this; regardless, you’ll need an NMS that is ready to work alongside these identity-driven networks.
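To make this concrete, here is a minimal sketch (in Python) of the kind of alarm-routing rule an NMS might apply to failed edge authentications. Everything in it is hypothetical - the trap OID, field names, and team names are invented for illustration, not taken from any particular product:

```python
# Hypothetical alarm routing: decide who is notified when an intelligent
# edge switch reports a failed 802.1X/RADIUS authentication. The trap OID
# and field names below are invented for illustration.
FAILED_AUTH_OID = "1.3.6.1.4.1.99999.1.2.3"  # hypothetical "bad connection" trap

def route_alarm(trap):
    """Return the teams that should receive this trap as an alarm."""
    recipients = ["network-ops"]           # the network team sees everything
    if trap.get("oid") == FAILED_AUTH_OID:
        recipients.append("security-ops")  # failed auth also goes to security
        if trap.get("port_type") == "wireless":
            recipients.append("wlan-admins")
    return recipients

# Example: a failed wireless authentication reported by an edge switch
trap = {"oid": FAILED_AUTH_OID, "switch": "edge-sw-17", "port_type": "wireless"}
print(route_alarm(trap))  # ['network-ops', 'security-ops', 'wlan-admins']
```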

When senior management (or worse, external auditors) come knocking, asking for reports of network use, how will you provide that information? Find out what kinds of audit and reporting requirements your organization may place on the network for privacy or other legally mandated compliance reasons, and use those as further criteria in your NMS selection.

The basic underpinning network management technology in use today is the Simple Network Management Protocol, or SNMP. SNMP versions 1 and 2c (the most frequently used versions) are infamously insecure. These protocols should never be used outside the secure perimeter of your network, and should be regarded dubiously even for use inside it. Most network gear you buy today supports SNMPv3, the secure, encrypted version of SNMP. The challenge comes when evaluating the NMS, as many still do not support SNMPv3 out of the box. This is certainly something to check for.
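To illustrate why this matters on the wire, here is a minimal polling sketch using the open-source pysnmp library (one option among many, shown with its modern high-level API; the host address, community string, and v3 credentials are all placeholders). The v2c community string travels in clear text, while the v3 user is authenticated and encrypted:

```python
# Minimal sketch: the same sysDescr poll over SNMPv2c and SNMPv3 (pysnmp).
# Host, community string, and credentials are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UsmUserData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

target = UdpTransportTarget(("192.0.2.1", 161))
oid = ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))

# SNMPv2c: the community string is sent in clear text on every request.
v2c = CommunityData("public", mpModel=1)

# SNMPv3: authenticated (SHA) and encrypted (AES) - no clear-text secrets.
v3 = UsmUserData("nms-poller", authKey="auth-secret", privKey="priv-secret",
                 authProtocol=usmHMACSHAAuthProtocol,
                 privProtocol=usmAesCfb128Protocol)

for auth in (v2c, v3):
    error, status, index, var_binds = next(
        getCmd(SnmpEngine(), auth, target, ContextData(), oid))
    if error:
        print("poll failed:", error)
    else:
        for name, value in var_binds:
            print(name, "=", value)
```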

If a part of your network is outsourced and you still want or need to manage it, you will need an NMS that understands proxy-based SNMP management, likely alongside SNMPv3. Many network outsourcing companies will not provide this proxied monitoring, so check with your service provider before making this an NMS criterion.


Configuration

How is your organization dealing with issues of network device inventory, version control, and change management? Should your NMS be part of the solution or part of the problem?
Your NMS choice does not necessarily have to be part of a framework solution with a full Configuration Management Database (CMDB), but it should at least have advanced polling and collection abilities to stay current on what is out in your network. This data should also, at the very least, be readily exportable to your CMDB choice of today or tomorrow. The polling intervals should be readily configurable, so that you can use different intervals for network nodes of different importance, as sketched below.
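Here is a rough sketch of what tiered polling intervals can look like, with a CSV export that a CMDB could import. The node names, tiers, and intervals are all invented for illustration:

```python
import csv

# Hypothetical inventory: node name -> (address, importance tier).
NODES = {
    "core-rtr-01":  ("10.0.0.1",  "core"),
    "edge-sw-17":   ("10.0.5.17", "edge"),
    "wap-bldg2-03": ("10.0.9.3",  "wap"),
}

# Poll critical core gear every minute; edge gear and WAPs less often.
POLL_INTERVAL_S = {"core": 60, "edge": 300, "wap": 600}

def export_for_cmdb(path):
    """Write the inventory as CSV so an external CMDB can import it."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "address", "tier", "poll_interval_s"])
        for name, (addr, tier) in sorted(NODES.items()):
            writer.writerow([name, addr, tier, POLL_INTERVAL_S[tier]])

export_for_cmdb("network_inventory.csv")
```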

When considering polling, you should also learn about the polling technology the NMS uses. Is it basic ICMP (ping) status for up/down? Or is it the slightly more complex SNMP-based polling? As discussed in the security section, consider the versions of SNMP to be used. Additionally, try to understand what kind of polling engine the NMS uses and how it adds or removes risk from the management of your network. This is the kind of area where an expert NMS consultant comes in very handy.
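For comparison with the SNMP sketch earlier, a bare-bones ICMP up/down poll can be as simple as shelling out to the system ping - which also shows how little information this style of polling actually returns (reachable or not, and nothing else). This sketch assumes Linux-style ping flags:

```python
import subprocess

def is_up(host, timeout_s=2):
    """Crude up/down check: send one ICMP echo request via the system ping.
    Assumes Linux-style flags (-c count, -W per-reply timeout in seconds)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(is_up("192.0.2.1"))  # True if the node answered, False otherwise
```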

Is it important to you to centrally manage the firmware and configuration of your network topology? There are many point-solution tools from the hardware vendors that provide this, as well as third-party applications specifically designed for this functionality. You should determine whether your needs are better served by integrating this functionality into your NMS or by obtaining an NMS that provides this ability natively. Wanting your NMS to handle your complete configuration management needs will dramatically shorten the list of available products, so it is advisable to focus on compatibility and leverage a point solution for firmware and configuration management, while letting the NMS manage discovery and status.


Business Driven Requirements
Every decision made in IT is governed by, or directly affected by, business drivers and requirements. Various requirements for your NMS selection will be driven by what is currently happening in other business areas of your organization, or by strategic initiatives.

We spoke earlier, in the security discussion, about regulatory compliance issues around having data collected and reported for audit; other areas of consideration include mergers and acquisitions, planned growth, and outsourcing. All of these factors require an NMS that is scalable and quick to update its understanding of your changing network topology. They may also require the ability to provide secured, limited access to the NMS for third parties who have shared interests in the support and maintenance of the network.

Does the organization have any plans around Data Centre consolidation? This kind of activity will mean fewer core network nodes, but more edge nodes, and an increased backhaul of network traffic. This again points to scalability of the solution, speed and accuracy of the discovery and polling mechanisms, and the ability to extract the network inventory information readily.

Green IT initiatives may have some impact on your NMS selection as well. While power reduction strategies likely point towards data centre consolidation, they can have other, less expected outcomes for the network: increased virtualization, possible outsourcing of certain services, and, since less printing means more electronic data movement, the need to get large files quickly back and forth between the core and mobile devices at the edge.

Ensure that your NMS selection accounts for these kinds of items through its ability to manage to the network edge (or beyond) with speed and accuracy, and through a fast and accurate causal engine that reduces the time spent diagnosing problems affecting the delivery of data to other business users. They may not always be network problems, but can you back that up objectively and quickly when the VP is standing in your door?

Another area to consider is managing the network as a delivered service to your customers, and the data collection, analysis, and reporting requirements that implies. Service Delivery Management in the NMS is rare, and tends to be available mainly in framework solutions. You can get to this point without a framework if you carefully consider how you will make the measurements of Service Level Agreements and Service Level Objectives available to the customers of your network, both internal and external to the organization.
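To make the measurement point concrete: the arithmetic behind an availability SLO is trivial; the hard part is collecting trustworthy downtime data and agreeing on the reporting period. A quick sketch with invented numbers:

```python
def availability_pct(downtime_minutes, days=30):
    """Percent availability over a reporting period of `days` days."""
    total_minutes = days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# 43.2 minutes of downtime in a 30-day month is exactly "three nines".
print("%.2f%%" % availability_pct(43.2))  # 99.90%
```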

Lastly, the biggest impact that business-driven requirements have on NMS selection is that of diminishing budgets and the requirement to do more, or the same, with less. This should lead you to consider how to budget for your NMS and its ongoing support and maintenance, but it also gives you the opportunity to make it an operational cost by leveraging some form of “Software as a Service.” Many vendors provide this, where for monthly or annual fees they will manage your NMS and provide the output you require from it, either by hosting the NMS remotely (debatable due to security considerations) or by implementing and maintaining the NMS on your site.
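As a back-of-the-envelope illustration of the capital-versus-operational trade-off (every figure below is invented; real quotes will vary widely):

```python
# Hypothetical 3-year cost comparison: perpetual licence vs. managed SaaS.
# Every figure here is invented for illustration.
years = 3
in_house = 100_000 + 20_000 * years   # up-front licence + ~20%/yr support
saas = 5_000 * 12 * years             # flat monthly managed-service fee

print(f"in-house: ${in_house:,}, SaaS: ${saas:,}")
# in-house: $160,000, SaaS: $180,000 -- SaaS can cost more in total, but it
# is pure opex, spread evenly, with no up-front capital outlay.
```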

Thursday, June 18, 2009

HPSU 2009 - Part 6 - What is OMi?

OMi - What is it really?!
There's been a lot of confusion (on my part as well, I must confess) about what exactly OMi is, and how it works. On the plus side, I've just had a great chat with one of the developers & two of the solution architects to help me finally understand what it is, how it works, and why you might want it.

First things first - OMi is a name appreciated only by the marketing team, because it has caused mass confusion: OMi is NOT the next version of OMW in the way that NNMi is for NNM. OMi is a "layer-on-top" tool that enhances your troubleshooting & root cause analysis capabilities for OMW/OMU (and OML?) and comes as part of the BAC suite. By purchasing & installing OMi you get BAC, and must install & operate the BAC server. So this loops back to those still running OVIS and wondering what they'll need to do about moving off of it (topic for an upcoming post - OVIS migration to BAC).
If you want to make use of OMi you need to do a couple of things...
1. Have OMW or OMU running with all requisite SPIs
2. Get your NNM up to NNMi (unless you don't have it, then skip this step)
3. Install BAC & OMi (kind of one step...)
4. In installing BAC & OMi you by default install uCMDB, which must now be configured
5. Run the automated configuration tool to pull the data from your OMW/U tool to populate the uCMDB
6. Start to configure your dashboards and views

Of course, I've grossly over-simplified, but it's a roadmap to get you started. As always - if you have specific questions you'd like me to answer or thoughts you'd like to share on any of this please reply to the postings!

HPSU 2009 - Part 5

The first session I attended Wednesday was Pete Zwetkof’s (HP NNMi Product Manager) “HP Network Node Manager i-Series upgrade tips & tricks.” Pete is a good solid speaker, who keeps a technical audience engaged.

Kevin Smith did this presentation online for Vivit previously, so you can certainly go online to find most of this information. If you are considering the upgrade of an existing NNM 7.x system to NNMi, it's well worth your time to gather as much info ahead of time as possible - this is not for the faint of heart.

Top advice out of this blog entry is to locate the 200+ page document that covers this topic in more detail at: http://support.openview.hp.com/selfsolve/manuals
The "NNM Deployment & Migration Guide" is the document to find. Written for those familiar with the “old” NNM it includes where things differ and the entire upgrade process.

It was also noted that the 8.12 release is due out on the 26th of June. 8.12 unlocks the ability to send events to OM agents as traps – this is not available in prior versions of NNMi.


The key steps to take in executing the upgrade are:
• Gather & Transfer configuration data using packaged tools , then zip or tar the directory structure, move the archive, and uncompress in the new platform. Very high-level approach.
• Configure SNMP access
• Configure Discovery
• Import loaded MIBs – preserve as many MIBs as possible – this involves some manual tasks, including the actions
• Import trap definitions
• Block/Ignore/Disable traps – in NNMi they aren’t really blocked, but just stopped from coming into the operational pipeline for processing
• Automatic actions in NNMi
• De-duplication, rate correlation, and pairwise matching of incidents
• Map hierarchy – you can migrate containers, but not nodes, because the filtering technology is completely different; you need to define filters for each migrated container

I won't go into gory detail on each of the steps, but that'll be another post for another time if the demand is there from my loyal blog readers - so if you want me to post more on this topic please let me know!
Pete crammed an unbelievable amount of useful info into a 45-minute presentation - I am impressed.

Wednesday, June 17, 2009

HPSU 2009 - Part 4

OK, I'm taking a change from my normal pace here and I'm going to be a bit brutal. I just attended a breakout session by Paul Salamone, Technical Architect with Lockheed Martin, on BAC optimisation tips & tricks.

Clearly naming conventions are the key tip &/or trick Paul has to share with us. Paul’s not a very dynamic speaker, and out of the gate we were getting some pretty obvious information; I suppose it's good for anyone who hasn’t played with BAC before. A lot of discussion centred around object and BPM naming conventions for the first 10 minutes…

Tips like:

Adopt a naming scheme

Distinguish your objects from o-o-t-b objects

Use biz unit names in your object names

Use app identifier acronyms

BE CONSISTENT

Profile Related Objects

Add the full name for objects seen by users

Just the acronym for objects not seen by users

An interesting conversation spun out of Paul's comment to "Put a BPM in your data centre (as well as distributed) – use it as a baseline to compare WAN performance vs. LAN performance." While this is standard info provided in any BAC training, it triggered an interesting discussion in the Q&A session at the end. A hypothetical question came up about having your BPMs only centralised and whether this would be effective - a moot point, because centralising all your BPMs defeats the purpose of having them. The only reason I can see for this is saving money by having fewer BPMs. I give Paul full points for arguing against this idea, as I agree whole-heartedly with him on it.


More of Paul's advice is to use the first couple of weeks to set a baseline for your thresholds and adjust quarterly (I question this, because you should really measure against known business cycles – e.g., retail, finance, manufacturing, etc.). That said, it's still standard practice with any implementation.


On the topic of the dashboard, Paul advised restricting it to trained users – why? To keep users from panicking if they don't understand how the app works. Well, if you set your thresholds well and have solid promotion and knowledge management supporting the deployment, this shouldn't be an issue.


If you want to use the Geographical Map, set the BPM Source Adapter to Transaction/Location – RTFM.


Paul discussed the use of the worst-child vs. percentage rule, and why one would be used versus the other – again, this is beginner stuff, because any experienced OMW/NNM person understands the difference and why it's important to reduce false positives.

A useful tip from Paul was to use profile names as a way of hiding objects – placing (HIDDEN) or some other key word in the front of the name allows you to use filters to block those items from general public view.

Back to discussion about naming schemes, this time for scripts.

Two useful tips came up under the scripts discussion:

Add logic to script to fail all transactions if one fails

Use multiple service account IDs with long password expiration

Paul finished in 30 minutes; once we got to Q&A, a question came up about SLM actual response time vs. % good transactions. Paul suggested using actual response time for SLM instead of % good transactions, but didn't explain why. Conversation then moved to principles around how to organise SLAs.

The quantity of servers was queried; two disparate environments had been built up, and they are in the process of merging them.

When asked about their mechanism for deploying BPMs, the response was that they build them centrally and ship them out physically, with a monitor, as a desktop system. Ideally it should be more of an appliance build. They allow the BPM systems to receive the same updates/patches as any other workstation. Again, no explanation of why the decision was made. There are certainly very strong arguments against treating ALL your BPMs as standard corporate desktops…


All in all, I was disappointed - I think the title of Paul's session had me expecting more of a deep-dive into technical gotchas, not advice on naming conventions.


Tuesday, June 16, 2009

HPSU 2009 - Information Request

Dear readers,

If there is something specific you'd like me to investigate on your behalf while I'm at the Software Universe and have all the major HP Software partners and product managers available to me, please let me know - you have until Thursday morning!

Enjoy the blog, and thanks for reading!
Jason.

HPSU 2009 - Part 3


BTO Mainstage - Robin Purohit

Discussion of Trends and Roadmap of Products


Robin took the BTO Mainstage to lead a discussion of trends and provide a roadmap of products. First Robin discussed some of what he referred to as "Breakthrough Communities", which included:

  • User communities - Vivit
  • Web 2.0 Collaboration
  • Customer Advisory Boards

The next topics were around hot tech trends and how those affect HP Software.

Virtualisation

~15M virtual machines shipped last year - expected to double over the next 3 years. Robin noted that there are improvements in data protection & virtualisation from the HP perspective. I'll be getting more information from the show floor tonight & tomorrow.

SaaS

18% of biz apps are expected to be deployed via SaaS (Software as a Service). HP in the last 6 months has made the latest Service Manager & Asset Centre available via SaaS for customers who don't want to run the applications in-house.

Web 2.0

Robin noted that there is a hot trend seeing the transition of rich internet technology from the consumer world to the enterprise. These Web 2.0 apps power rich visuals, but have a huge impact on the QA, security, and support/performance aspects of managing your datacentre.


Robin also noted that there is what he called a "regulatory tsunami" coming - indicating that if we think there's been a lot of work around SOX etc., just wait to see what the current US & international administrations will be announcing in the next year. HP is preparing to assist customers in this area by way of their Tower software acquisition, and a focus on PCI compliance with that branch of HP Software. This also segued into the topic of "cloud governance" and how to know that, when you are buying IT services on the wire and paying for use, you are protected. This is where the HP services group's "Cloud Assure" steps in, making sure that what you buy will work.


As always, the lecture to the audience came around to the topic of quantifying the business value of IT. HP is making efforts to move beyond cost and show revenue related to the IT services that IT delivers. Not a new message, but HP has tooled up more applications in this area. The release of the HP IT Financial Management Portfolio includes Financial Planning & Analysis, which shows cost trends over time by different lines of biz and provides a breakdown of labour vs. capital. This product integrates with 3rd party ERP & financial systems for single-pane-of-glass analysis. Robin also mentioned the upcoming release of PPM 8.0. The new suite also includes enhanced Asset Management with a focus on software licence management; Robin stressed the well-rehearsed point that licence compliance is the key use & ROI area for this tool.


The discussion moved on to the Application Security Centre, with the free 21-day trial of App Sec Centre on SaaS. Robin informed us of HP's opinion that 35% of Flash apps violate Adobe best practices, which is where the SWFScan tool comes into play. Apparently the tool is aware of nearly 4000 Flash applications and has had 13000 downloads so far, driven by the fact that 35% of all websites run Flash of some variety.

Data Protector was the next topic, and HP claims it reduces the TCO of backups by 50%. The tool currently has 30000 installs, and HP claims market-leading growth for what it calls the leading solution for virtual servers & storage. Marketing figures; nothing of substantial interest to me there.

The presentation came around at last to Application Lifecycle Management, and Robin showed the HP vision, which outlines roughly like this:

Vision - Roadmap - Release - Cycle - Daily

CMS - PPM - Requirements - QC - Compliance

HP flogs the use of their "Agile Accelerator", with its inclusion of "best practices" to speed adoption, and the app lifecycle is being completed by a focus on app retirement. Pretty interesting, that last part, but still vague. The numbers quoted for ROI purposes were $200-300k in hardware & other savings to be seen when you consolidate, but I suspect those are average numbers from Fortune 500s.

Robin then discussed HP Operations Orchestrator, and HP Configuration Management. Apparently this week they will be publishing a set of best practices for configuration management - I will read & review those once they're available.

Next Robin brought up some brief guests:

Jonathon M Gregory, Partner, Strategic Effectiveness (SITE) @ Accenture - claims 5-30% savings by implementing Accenture's IT financial mgt solution. Jonathon spoke very briefly and basically made a commercial for how Accenture is the key player in finding the single version of the truth for IT. If you want to know more, find your local Accenture rep! :-)

Robin mentioned RIM, and the HP Operations Manager on BES solution was briefly plugged - I will get more info from the show floor and share it.

The last topic, before we were subjected to a 45-minute commercial for how wonderful HP software is, was virtualisation - how do we do it right in the data centre? Brian Byun, VP of Global Alliances for VMware, spoke quickly about VMware vSphere 4. In short, it creates a private cloud architecture and supports cost transparency models by leveraging vCentre to implement shared infrastructure capacity planning, and it allows customers to federate the infrastructure & include resources from external suppliers.


Lastly, before Brian left, he & Robin made an announcement: VMware will be launching integrated BSM for physical and virtual systems, HP DDM will be integrated in future releases of the vCentre suite, and there will be integrated virtual and physical client desktop management.

HPSU 2009 - Part 2


I helped set up the Vivit booth with other local chapter leaders and the board of directors, grabbed a really quick (but heavy!) breakfast, then squeezed into the mainstage room for the keynote presentations.

They kicked things off with some brilliant animated clips outlining the current challenges faced by IT - including IT business alignment, the current "new" economy, virtualisation, and cloud computing amongst other things.

Jake Johanssen was our host for the morning. A stand-up comedian is a really different take, but it made things much more entertaining than they've been before at 08:00 on a Tuesday morning. The typical jabs at Canadians were made, but Jake threw in some other topical humour that was quite engaging. He really got the crowd warmed up well. The room was definitely smaller than in previous years, but it was filled. We're waiting to hear overall attendance numbers. Some interesting trivia about the impact of the economy: 27% of conferences in Vegas were cancelled this year.

Andy Isherwood VP & GM of HP Software Services
Andy started off thanking people for coming given the economic constraints - a message that brings home where things are at globally and across the US. His discourse started with a focus on budgets being cut between 0 and 40%, and a lot of uncertainty among HP's customers. Andy asked the audience to consider the situation as an opportunity to be innovative. The HP opinion is to try to get ahead of the economic recovery curve by aligning with business, reducing costs, consolidating, and increasing efficiency.
Examples were provided of three organisations that have achieved a quick ROI:
  • JetBlue
73% decrease in testing costs, 80% reduction in post production failures, 3x increase in testing efficiency, 70% increase in test virtualisation.
  • Altec
10% app downtime reduction, 20% faster response time, 15% increase in customer satisfaction
  • T-Mobile (US-Washington)
Significant cost savings through efficiency improvements, 50% decrease in ERP group testing time, 75% reduction in ERP post-prod defects, greater application availability.

These three are the winners of the HP Software Solutions 2009 awards of excellence.

It's always nice to hear that organisations have done wonderful things, but the key question is always "So where do you start?"
HP's keynote answer to this was:
  • Operational focus to provide a transformational focus over time.
  • Cost optimisation to move capital to fund innovation initiatives
  • Focus on execution of automation, financial management, virtualisation, and consolidation.
This is, to me, really just new wrappings and graphics on an existing message. I have to wonder why we need to keep making this message heard - is it just not getting through to IT leaders, or are those leaders hearing and getting the message but unable to execute because of other factors (the economy, difficulty in showing ROI)? Sounds like this area needs exploring - future topic?

Andy followed up with the HP message that they are about solutions, not products. The delivery options they are flogging are in-house, EDS, Cloud Services SaaS, and HP Partners (oh yeah, them.) I'm still waiting to hear something positive about the EDS acquisition from a customer perspective - I'll be visiting their booth on the showfloor later today to see what I can learn.

Andy touched on what's new in the IT services offering from HP, which is "Cloud Assure", IT financial management, IT performance analytics, and IT resource optimisation.

The marketing branding from HP has changed this year to four new words: Optimise (technology portfolio - IT Mgt Software), Leverage (biz info), Elevate (biz performance), and Improve (customer experience).

Andy discussed HP support, and claimed that customer satisfaction is at an all-time high based on improvements made over the last two years. A big piece, apparently, was "in-sourcing" aspects of the HP support organisations - interesting. He did claim that he was under no illusions that things were ideal. He also commented that things need improvement, with customers getting stuck at L1 when things aren't escalated in a timely manner, and at L3 when software changes that need to be made aren't happening fast enough.

The services organisation was discussed, and how it has been tightly embedded into HP Software overall and is using optimisations like knowledge management to increase IP. Andy also noted that he feels this is not in conflict with the partner environment, but I think this statement is at odds with what is actually happening (actions speak louder than words) - in particular, the changes made to partner status that make it basically impossible for independent and small consulting organisations to hold partner status (and benefits) with HP.

Andy closed by thanking the audience for their trust & confidence in HP and for our time invested here at the show, and reinforced his message that HP is proud to be a customer's partner, ready to listen and act with customers to see success for everyone.


Betty Smith VP of Process at John Hancock & President Emeritus with Vivit
Betty started with a discussion about the first HP Software (Mercury) project, around TestDirector, to improve defect management on their internal web site, replacing spreadsheets and Access databases with a centralised tool. They then migrated TestDirector to other software bases in a 6-week period once the initial pilot project had completed. Some impressive numbers, to be sure.

Betty discussed the complexities of the John Hancock/Manulife Financial organisation and the desire to drive efficiency and competitive advantage.

She highlighted three main points of how this is done:
  • Establish point solutions that provide value - JH does not support long implementations; any job must be finished within 2-6 months.
  • Extend the solution to other areas - cross the silos & work across the organisation
  • Create the longer term vision and focus on a match between IT & business goals. This provides JH an end state that is adaptable but stable, and leads to lifecycle management.

Product suites in particular that JH has implemented include Quality Management and Asset Management (DDM, uCMDB). Betty claimed an efficiency increase for chargebacks, from 3 weeks to one day, using these new systems & processes. Performance Centre and Business Availability Centre were also discussed; both are based out of centralised teams that work across all silos to support the business units.

Further, the discussion touched on Service Catalogue and Service Manager being centralised and underpinned by uCMDB.
PPM started off as point solutions within numerous business units, but information wasn't being shared well. The PPM project consolidated and eliminated various applications to standardise on a single platform. Another key advantage of the project was that it defined centralised PM practices and processes.

Betty's main claim was that she works off of a simple end-state vision, which she shared graphically with the audience. It had some interesting approaches illustrated in the diagram.

Betty discussed techniques that JH used to increase awareness and support, including monthly "show & tell" meetings of internal cross-organisation SIGs, user forums developed in SharePoint, and opportunities for the solutions to be showcased. JH puts on monthly roadshows at the senior mgt level to validate direction and what is important to each of the BUs, and to allow an opportunity to adjust priorities; these are run with individual biz units to really understand what is driving them and in what areas JH can help them improve.

Betty discussed the regulatory requirements of a financial organisation and the ability her successful projects have given JH to free up resources previously committed to audit compliance work - this also allows JH to demonstrate governance of off-shore vendors by having everyone use a centralised, consistent solution. Engagement of governing bodies around risk management & expense management is another example of working across biz units. By engaging them in the use of the tools, they contribute to the setting of policies and drive the use of the tools as a standard for the organisation.

Betty cited that process is over 50% of a project implementation, making the point that technology doesn't stand on its own without solid process that's oriented to your biz units & directions.

Betty summarised by emphasising that success is achieved by building incrementally, focusing on low-hanging fruit, and creating an end-state vision.

Monday, June 15, 2009

HPSU 2009 Part 1

We spent the past two days (Saturday & Sunday) in my first Vivit Board of Directors meetings. Wow; marathon meetings! But productive. I can't spill the beans quite yet, but there are a lot of changes coming fast for the members of the Vivit users group, and lots of positive things for the user community.

I'm just about to run out for dinner, but the posts will start fast and furious tomorrow as I attend the keynote sessions with my laptop charged up & online! I look forward to bringing the followers of my blog the latest & greatest in near real-time from the keynote presentations, breakout sessions, and tradeshow floor. This year those blog posts will be supported with podcasts as well.

Have a great evening, and blog you tomorrow!

Friday, June 5, 2009

Coming Soon - HP Software Universe

June 15 through 18 I'll be blogging and podcasting almost-live from HP Software Universe in Las Vegas. Keep your RSS feeds tuned here for the latest news from HP and the various key speakers. Any news from Vivit, the official HP users group, will also make its way into my posts, so there may be some good stuff there too. If there's something specific my readers want me to try to find out while I'm in Vegas at the conference, please email it to me at itmanagecast@gmail.com and I'll be glad to include it.

In fact, feel free to send your questions and comments via email or attach an audio file with your thoughts if you'd like to have it answered in the podcast.

Tuesday, April 28, 2009

The State of ESM in 2009

With our feet now firmly in the year 2009, it's time for a look back - way back - to see where we've come from. I like to use a looking-back segment like this as a checkpoint on our way as an industry, as organisations, and as individuals, to see if we're on target with the goals we set.

ESM has matured in the past few years with the popular adoption of ITIL/ITSM, moving from Enterprise Server Management to Enterprise Systems Management and, today, Enterprise Service Management. My opinion is that we're on track here at the highest levels and intentions of ESM, because the understanding and adoption of ITIL/ITSM principles into ESM gets us where we've always intended to be - managing and monitoring the services that our IT infrastructure provides to our various customer groups.

The challenge is that the software developers providing ESM solutions have gone off in three directions: one group moving too quickly toward monstrous "ideal" solutions, another focusing on very niche solutions, and a third (where I think they have it right) scoping out existing, proven point solutions to become more mature and process-oriented products.

It's always interesting to me that the big software manufacturers and Gartner (et al) feel that they are setting the direction for ESM through their acquisitions and marketing. The reality is that a lot of what they are providing is just irrelevant for 90% of the organisations trying to put an efficient ESM solution in place. It's great stuff for that top ten percent of customers who are process-mature and cash rich. But for the rest of us, we need more focus on the practical and less on where those in their ivory towers feel that the industry should be heading.

From my experience, most people are still just trying to get ESM right, and get it to fit into their processes. The tools, people, and processes (of course) must go hand-in-hand - but just as you need the right number of people (not too many, and with the right skills) and processes tailored to fit the way your organisation operates, you need ESM applications (the tools) that fit the people and processes. Ideally you start with the processes that fit the vision, get the right people the skills they need, and then find the right tools. There are SO MANY choices for ESM products these days that doing the tools last shouldn't be a problem; yet, ironically, it so often seems to be the starting position.

Another gap I've found in the ESM industry in 2009 is a lack of information on trends in network management. It's not sexy anymore, so it's become the red-headed stepchild of ESM. More's the pity, because it's still a critical aspect and a great starting point.

With all the hoopla and focus on (not undeserved, but distracting) ITIL, ITSM, and the federated, unified CMDB, attention has been diverted from interesting things that have been going on in the OpenSource ESM world.

Upcoming ITManageCast posts are going to focus on these areas: trends in network management and trends in open source ESM.

Have a fantastic summer.

Thursday, April 16, 2009

HP Software OMW vs. OMi

Yesterday I was in attendance at a presentation by HP Software gurus, and one of the topics was HP Software Operations Manager. One of the items I found a little confusing at first, and thought I'd share with the online community, is the idea of OMW versus OMi. Currently, most of the install base of the Operations Manager product from HP Software is one of the following:

"old naming conventions" old products (still supported, but not sold)
HP OpenView Operations for Windows 7.5x (OVOW)

"new naming conventions" current products
HP Software Operations Manager for Windows 8.x
HP Software Operations Manager i 8.x

(note that I'm just focussing on the Windows platform for the moment)

As it turns out, OMW & OMi are differing products, primarily in the interface. Keep in mind that OMW (& OMi) are architecturally split into three pieces - agentry, server, and console. OMi uses the same agentry & server components, but layers a new console on top of them.

Confused yet? :-)

OMW is currently at release 8.10 and will continue with future releases for now (next expected later this year to be 8.15) as fixes/improvements are added to the product. OMi is being developed in parallel, and is an upgrade (free for current OMW support contract holders) to OMW 8.10. Once you've "migrated" to OMi you stay on that platform. HP is not "pushing" customers to OMi at this time, so if you've got OMW running well, and don't see an immediate need for OMi features (to be discussed in my next posting) then don't rush out the door, but start your research.

Please feel free to ask for any further clarification that I can add to my next postings!

Wednesday, March 4, 2009

The eMail You Wish Never Was



We've all done it at least once. Ideally, you only do it once.
You get caught in a moment when you are "up-to-your-eyeballs" and an email comes into your In-Box that is legitimately urgent to someone else, but just not to you at the moment.

At that instant you have some choices... quickly respond with an email back to effectively say "I'll look at this once I have a moment", flag it for follow-up but don't reply, or pick up the phone for a brief conversation. The first and third options are both quite viable, and common sense dictates that the second option works well for you but puts you at risk of continually receiving more emails.

The problem with the first and third options will come about in HOW you respond. Remember, you're not getting this email while you're casually reading my blog or sipping a coffee at your desk. Imagine yourself at the single busiest point you've been at in the past three years of work; in executing some of that work you've needed to use your email... and while accessing your email to compose a quick note to clarify some work you are delegating, you notice "the message" in question.

Now we've set the scenario, and this is where the challenge comes. On 360 days of the year, this isn't an issue, but on one of those TOP FIVE busiest days you have in a year, either the content, the tone, or a past interaction makes this email the straw that breaks the camel's back.

Your thoughts run very quickly along the lines of "does this person have any idea how busy I am right now?" or "why is this issue MY problem right now?" I know mine have!

This is the point where you either quickly type up an email response or pick up the phone.

This is also the point where you can unwittingly make a mistake that can take some time to mend.

The reality is that 95% of the time, the sender of the email does NOT know how busy you are, and the issue was obviously of importance to them, but not urgent enough to warrant them placing a phone call to you. Generally speaking, that should be the first indicator that you do NOT need to reply this instant. But, our human nature and sense of ownership of situations as managers urges us to quickly plunk at the keyboard a hasty reply and click send, then blast off to the other 32 things desperately needing our attention at that moment.

An hour (or not even that long!) later you get the phone call that makes you realise that you wish you'd never clicked "send." In your haste, urgency, and certain level of frustration you've typed something you shouldn't have; something that under any "normal" circumstances you never would have, and now you've opened Pandora's box.

So my long-winded story has gotten us to the point where we have two things to cover: what we do now to deal with the situation and how we learn not to get into the situation again.

The only way to deal with a situation like this is to "eat crow." The reality is that while you have correctly perceived that someone else had no idea of how busy, stressed, harried you were at the moment they had electronically requested something of you that you felt was not your responsibility to have to deal with, you also sent off an email without considering or understanding how busy, stressed, or harried this individual was with what they were dealing with at that moment. It's entirely likely that the problem they were bringing to you "isn't yours" but perhaps they felt they had nowhere else to go, and were looking for help (regardless of how that request may have been phrased).

So now, as a responsible manager, it behooves you to go cap-in-hand to the individual you sent the electronic reply to and hold a brief but frank discussion, starting with a sincere apology for your tone, but focused on understanding their issue, helping them understand what you have on your plate, and coming up with some solution. You may not have their answer but, more than likely, once you understand WHY they were asking you in the first place, you can point them to someone else who does. And have this conversation face-to-face if at all possible; this kind of thing does not translate well over the phone, and further emails will only risk making things worse due to their intrinsically impersonal nature.

And finally, how do we avoid this kind of situation? As I suggested much earlier, if you are truly overwhelmed, do NOT send an email or phone the individual without taking five minutes to think through your answer in the context of the question: "What is happening at this person's desk right now to prompt them to send me this email?" This is a great little trick, guaranteed to put you in the right frame of mind to be helpful and avoid unnecessary workplace confrontations and stress.