OK, I'm making a change from my normal pace here and I'm going to be a bit brutal. I just attended a breakout session by Paul Salamone, Technical Architect with Lockheed Martin.
Clearly, naming conventions are the key tip and/or trick Paul has to share with us. Paul's not a very dynamic speaker, and out of the gate we were getting some pretty obvious information; I suppose it's good for anyone who hasn't played with BAC before. A lot of discussion centred around object and BPM naming conventions for the first 10 minutes…
- Adopt a naming scheme
- Distinguish your objects from out-of-the-box (OOTB) objects
- Use business unit names in your object names
- Use application identifier acronyms
- For profile-related objects:
  - Use the full name for objects seen by users
  - Use just the acronym for objects not seen by users
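Paul didn't give a concrete scheme, but rules like the ones above are easy to enforce mechanically. Here's a hypothetical checker of my own invention, assuming a `<BIZUNIT>_<APPACRONYM>_<description>` pattern – the pattern and example names are illustrative, not from the session:

```python
import re

# Assumed convention: <BIZUNIT>_<APPACRONYM>_<description>, e.g. "FIN_CRM_Login".
# Both the pattern and the names are my own examples, not Paul's.
NAME_PATTERN = re.compile(r"^[A-Z]{2,6}_[A-Z]{2,6}_\w+$")

def follows_convention(object_name: str) -> bool:
    """Return True if the BAC object name matches the assumed scheme."""
    return bool(NAME_PATTERN.match(object_name))

print(follows_convention("FIN_CRM_Login"))      # True
print(follows_convention("default_profile_1"))  # False - indistinguishable from OOTB
```

A check like this could run as part of a configuration review to catch objects that would blend in with the out-of-the-box ones.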
An interesting conversation spun out of Paul's comment to "Put a BPM in your data centre (as well as distributed) – use it as a baseline to compare WAN performance vs. LAN performance." Now, while this is standard info provided in any BAC training, it triggered an interesting discussion in the Q&A session at the end. A hypothetical question came up about having your BPMs only centralised and whether that would be effective. It's a moot point, because centralising all your BPMs defeats the purpose of having them; the only reason I can see for doing it is saving money by running fewer BPMs. I give Paul full points for arguing against this principle, as I agree whole-heartedly with him on this.
More of Paul's advice: use the first couple of weeks to set a baseline for your thresholds and adjust quarterly. (I question this, because you should really measure against known business cycles – i.e. retail, finance, manufacturing, etc.) That said, it's still standard practice with any implementation.
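Paul didn't show how he derives a threshold from the baseline. A minimal sketch of one common heuristic – mean plus two standard deviations, computed separately per business cycle rather than as one quarterly number – might look like this; the function name, sample data, and heuristic are mine, not his:

```python
import statistics

def baseline_threshold(samples_ms, sigmas=2.0):
    """Derive an alert threshold from baseline response-time samples (ms):
    mean plus `sigmas` standard deviations. A common heuristic, not Paul's method."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.pstdev(samples_ms)
    return mean + sigmas * stdev

# Keep separate baselines per known business cycle, not one number per quarter.
quiet_week = [210, 220, 215, 225, 218]   # illustrative samples
month_end  = [480, 510, 495, 530, 505]   # illustrative samples
print(round(baseline_threshold(quiet_week)))  # 228
print(round(baseline_threshold(month_end)))   # 537
```

The point of the two lists is the argument above: a single quarterly threshold would either alert constantly at month-end or never during a quiet week.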
On the topic of the dashboard, Paul advised restricting it to trained users – why? To keep users from panicking if they don't understand how the app works. Well, if you set your thresholds well and have solid promotion and knowledge management supporting the deployment, this shouldn't be an issue.
If you want to use the Geographical Map, set the BPM Source Adapter to Transaction/Location – RTFM.
Paul discussed the use of worst-child vs. percentage rule, and why one would be used versus the other – again this is beginner stuff because any experienced OMW/NNM person understands the difference and why it’s important to reduce false positives.
A useful tip from Paul was to use profile names as a way of hiding objects – placing (HIDDEN) or some other keyword at the front of the name lets you use filters to block those items from general view.
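The filtering itself happens in BAC's views, but the idea is simple enough to sketch. Assuming the (HIDDEN) marker from Paul's tip and some hypothetical profile names:

```python
HIDDEN_PREFIX = "(HIDDEN)"  # the keyword is arbitrary; any agreed marker works

def visible_objects(names):
    """Filter out objects whose profile name starts with the hidden marker."""
    return [n for n in names if not n.startswith(HIDDEN_PREFIX)]

profiles = ["FIN_CRM_Login", "(HIDDEN) FIN_CRM_Maintenance", "HR_Portal_Search"]
print(visible_objects(profiles))  # ['FIN_CRM_Login', 'HR_Portal_Search']
```

A prefix marker like this is crude but effective: it needs no extra metadata, and any view that can filter on name can honour it.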
Back to discussion about naming schemes, this time for scripts.
Two useful tips came up under the scripts discussion:
- Add logic to the script to fail all transactions if one fails
- Use multiple service account IDs with long password expirations
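Paul didn't show script code (BPM scripts are typically VuGen/C), so here's a language-neutral sketch of the first tip: once one transaction fails, cascade the failure across the remaining transactions so the monitor shows the whole business process as broken, not half-green. All names here are illustrative:

```python
def run_business_process(steps):
    """Run named transaction steps in order; once one fails, mark every
    remaining transaction as failed instead of skipping it, so the monitor
    reports a complete (all-red) picture of the business process."""
    results = {}
    failed = False
    for name, action in steps:
        if failed:
            results[name] = "FAIL"       # cascade the earlier failure
            continue
        try:
            action()
            results[name] = "PASS"
        except Exception:
            results[name] = "FAIL"
            failed = True                # everything after this fails too

    return results

def ok():   pass
def boom(): raise RuntimeError("step broke")

print(run_business_process([("login", ok), ("search", boom), ("logout", ok)]))
# {'login': 'PASS', 'search': 'FAIL', 'logout': 'FAIL'}
```

The design choice matters for alerting: if "logout" were simply skipped rather than failed, the dashboard could show a mostly-green process even though no user can actually complete it.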
Paul finished in 30 minutes; once we got to Q&A, a question came up about SLM: actual response time vs. % good transactions. Paul suggested using actual response time for SLM instead of % good transactions, but didn't explain why. The conversation then moved to principles around how to organise SLAs.
The quantity of servers was queried; the answer was that two disparate environments had been built up and they are in the process of merging.
Asked about their mechanism for deploying BPMs, the response was that they build them centrally and ship them out physically, with a monitor, as desktop systems. Ideally it should be more of an appliance build. They allow the BPM systems to receive any updates/patches that any other workstation does. Again, no explanation of why the decision was made. There are certainly very strong arguments against treating ALL your BPMs as standard corporate desktops…
All in all, I was disappointed. I think the title of Paul's session had me expecting more of a deep-dive into technical gotchas, not advice on naming conventions.