SysAid Blog

Welcome to the SysAid Blog - the place to go to find out where the IT industry is going, and what SysAid’s role is in it.

8 Tips on How to Plan for Configuration Management – Part 1

Posted by on May 30, 2017 in Asset Management
We’ve all probably heard, and even used, the phrase “fail to plan, plan to fail,” but never has it been more pertinent than when planning for a corporate IT configuration management capability – from strategizing, through selling the investment in it, to using it to make a difference to IT and business operations. Ideally, configuration management is one of those ITIL and IT service management (ITSM) processes that should sell itself, but the reality is that it’s much harder to get buy-in for configuration management than for, say, change management. Then, once an initiative has been started, configuration management is an ITSM capability that can quickly spiral out of control, consequently failing to deliver on the promised positive business outcomes, so it needs very careful planning in order to be effective. Don’t worry though: this blog contains the first four of a total of eight top tips for creating an effective configuration management plan, one that you’ll have no trouble “selling.”

Tip 1: Start With the Basics

Start by explaining that ITIL’s service asset and configuration management (SACM) is the process (or capability) responsible for ensuring that the assets required to deliver IT services are properly controlled, and that accurate and reliable information about those assets is available when and where it is needed. This information includes details of how the assets have been configured and the relationships between assets. It's also a case of setting the scope and boundaries for configuration management – what it is and isn’t, and what it will and won’t do. Thus, the plan’s introduction (or a referenced appendix) should include an overview of how configuration management will work in your organization, along with objectives and key outcomes. This will likely include:
  • Roles and responsibilities
  • The interfaces into other ITSM and business processes
  • The technology employed, and how it works with other IT management and business systems
  • The reasons why configuration management is needed
  • How configuration management will change things for the better
  • Some anticipated outcome statements such as fewer change delays or issues, quicker incident resolution, or more efficient capacity planning

Tip 2: Set Out Your Scope

One of the primary reasons configuration management initiatives fail is because people try to do too much too soon. So setting the right scope is a key part of your plan. When planning for configuration management, there are two key approaches:
  1. Broad and shallow – knowing a little about everything
  2. Thin and deep – knowing a lot about a limited number of things, often the most business-critical or problematic IT services.
The latter approach is usually the one taken, as it provides maximum value early on. Then there are three basic layers to consider:
  1. Inventory
  2. Assets
  3. Configuration Items (CIs)
The inventory (management) layer takes care of consumables: keyboards, mice, CD drives (if still used), power cables, USB sticks, security peripherals, etc. It’s all about equipment that needs to be tracked for monetary value and to make sure that it’s in stock…but that’s about it. The next layer is assets or asset management. Example assets include: PCs, laptops, and printers. This is still the stuff that we need to keep track of for monetary reasons and to make sure they’re in stock when needed, but we also need to know locational information and how they are supported. The final layer is (configuration) items under the control of configuration management, which are often the important items that make up your critical IT and business services. Servers, network devices, and business applications are all CIs that should be under the control of configuration management to ensure that they are managed, supported, and subject to the appropriate change control.
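To make the distinction between the layers concrete, here’s a rough sketch in Python with hypothetical example items; the point is simply that each layer carries progressively more information and control, not that these categories are fixed.

```python
# A rough, illustrative model of the three layers described above.
# Example items and tracked attributes are hypothetical, not a CMDB schema.
LAYERS = {
    "inventory": {            # tracked for value and stock only
        "examples": ["keyboard", "mouse", "USB stick", "power cable"],
        "tracked": ["monetary value", "stock level"],
    },
    "asset": {                # adds location and support information
        "examples": ["PC", "laptop", "printer"],
        "tracked": ["monetary value", "stock level", "location", "support details"],
    },
    "configuration item": {   # full control, including change control
        "examples": ["server", "network device", "business application"],
        "tracked": ["monetary value", "location", "support details",
                    "relationships", "change control"],
    },
}

for layer, details in LAYERS.items():
    print(f"{layer}: tracks {', '.join(details['tracked'])}")
```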

Tip 3: Agree on a Suitable Naming Convention

The plan should explain any proposed, or agreed, naming conventions or nomenclature. Confused? Every CI or item that’s under the control of configuration management should have a unique name or identifier. When I’m tasked with implementing a configuration management capability from scratch, I try to use a naming model that will make it easy for the CI to be identified. So I tend to use something like the following: Type of Service – Location – Level of Complexity. So a WinTel server based in New York requiring third-line support would be WinTel1234-NY-L3, and a business application located in Dublin requiring first-line support would be BusApp1234-DUB-L1, and so on. In terms of naming conventions, it doesn’t matter what logic you use, as long as everything has a unique identifier.
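Here’s a minimal sketch of that naming logic in Python. The type, location, and support-level codes are hypothetical; the exact convention is whatever your organization agrees on, as long as every CI ends up with a unique identifier.

```python
# Build a CI identifier of the form Type-of-Service + sequence - Location - Level.
def ci_name(service_type: str, sequence: int, location: str, support_level: int) -> str:
    """Build an identifier like 'WinTel1234-NY-L3'."""
    return f"{service_type}{sequence:04d}-{location}-L{support_level}"

print(ci_name("WinTel", 1234, "NY", 3))    # WinTel1234-NY-L3
print(ci_name("BusApp", 1234, "DUB", 1))   # BusApp1234-DUB-L1
```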

Tip 4: Know Your IT Estate

A key part of your plan is the baselining process: taking a snapshot of the critical services and their key dependencies so that we know exactly what makes up the IT or business service. The purpose of a baseline is to capture a measurable part of the service so that it can be added to a CMDB (configuration management database) or CMS (configuration management system). Since it’s a snapshot of the state of a service at a given point in time, it can be used as part of change management activities: if the service has changed, there will need to be a valid, authorized change request against it, and the baseline also acts as a stable reference for future planned change work. However, don’t fall into the trap of trying to capture too much data at first – you can always build things up later. If you try to go into too much detail when starting out, you might run out of time and money before achieving any communicable success. Also bear in mind that the more detail you capture, the more work it will be to maintain it. Here are some useful CI attributes to capture as a starting point (a minimal sketch of a CI record follows the list):
  • Unique Identifier
  • CI type, for example: server, network device, software package
  • Version number
  • Support details
  • Vendor details
  • License details
  • Purchase date
  • Owner
  • Location
  • Status
  • Warranty details
  • Relationships to other CIs
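As mentioned above, here is a minimal sketch of a CI record built from those starting-point attributes; the field names are illustrative, not a SysAid or CMDB schema.

```python
# An illustrative CI record using the attribute list above (field names are assumptions).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConfigurationItem:
    unique_id: str                 # e.g. "WinTel1234-NY-L3"
    ci_type: str                   # server, network device, software package...
    version: str
    support_details: str
    vendor: str
    license_details: str
    purchase_date: date
    owner: str
    location: str
    status: str                    # e.g. "live", "in maintenance", "retired"
    warranty_expiry: date | None = None
    related_cis: list[str] = field(default_factory=list)  # unique IDs of related CIs
```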
So that’s my first four configuration management planning tips. Want more tips? Please come back soon for the second part of this blog.

How Cloud Changes ITIL Capacity Management from the Vague to the Specific

Posted by on May 23, 2017 in Cloud
Don’t be fooled – by the cloud tech-talk of instances, databases, code, and APIs – into thinking that cloud is all about technology. Unfortunately, this kind of tech-talk can convince IT service management (ITSM) professionals into thinking that cloud is just another technology evolution with zero impact on ITSM. However, those that realize cloud is all about business, applications, service, and operations will understand the impact across the ITIL ITSM best practice framework – in particular, the impact on capacity management. Cloud requires a different capacity management process to that used for traditional, on-premise IT services. The key changes are in the following five approaches, all of which make capacity management more granular, and move from the long-range “vague” to the short-range “specific”:
  1. Forecast horizon is shorter
  2. Speed of change is faster
  3. Blend of resources is finer
  4. Automated changes in response
  5. Balance of CapEx and OpEx
An ITSM professional that understands these five cloud capacity management approaches will be a huge asset to any organization, measured in terms of the business bottom line as well as service quality.

1. Forecast Horizon is Shorter

Buying the full IT stack for on-premise IT service delivery is a long, difficult, complex, and expensive process. Want to know how long? It takes months – nine to twelve months is standard – to design, procure, and deploy any reasonably complex system on-premise. Once procured, it has a lifetime of three, five, or seven years. Maybe longer. This is the long, long horizon of on-premise capacity management. Over that time, capacity is over-provisioned for peak workloads and this over-provisioning burns money. One might as well be throwing dollar bills out of the window. But in the traditional IT operations spirit of “I only get fired for outages,” capacity management thinking prefers to avoid under-provisioning that can hurt customers and therefore the business. A capacity manager doesn’t have this many-year horizon with cloud services. The capacity manager now only needs to forecast ahead as far as the time it takes to add more capacity to the cloud services and that is, on average, around 15 minutes from decision to deploy, including the time to make a coffee and get comfy in front of the console. Cloud capacity managers additionally do longer predictions to save money by purchasing reserved cloud capacity, sometimes saving over 60% in costs. So capacity management still has a role to play in longer-forecast planning but it’s now about financial efficiency, not the avoidance of disaster.

2. Speed of Change is Faster

As if predicting capacity changes wasn’t hard enough, responding to them is difficult in non-cloud systems. Capacity managers cannot quickly respond to unplanned changes in demand if it takes months to procure and deploy capacity on-premise. If the IT service is down, or the business can’t adequately process transactions during highly-visible seasonal fluctuations such as summertime or Christmas (when, unfortunately, many staff are off work), the brand is damaged and customers leave. Cloud components can be scaled quickly, and even large amounts can be added in a few hours (10,000 VMs anyone?) with some extra communication with the cloud service provider. Plus, the business can scale down quickly too and turn off all of that excess capacity when the seasonal fluctuation subsides. This can’t be done on-premise; it can only be done in the cloud.

3. Blend of Resources is Finer

On-premise systems might be measured by the number and size of datacenters, comms rooms, and racks. Adding a server might mean adding another rack. That might mean adding another switch, and another rack. Which then might mean extending the closet or room, or even the datacenter. To avoid hitting these capacity potholes, long-range capacity management forecasting is done to provide more capacity well ahead of the predicted demand. This is a standard enterprise “best practice” approach that’s wasteful and expensive. In the cloud, it’s possible to keep on adding VMs without worrying about any physical infrastructure or other capacity limits – and so now the granularity of capacity is one virtual machine. If it’s possible to use higher-order cloud services such as AWS S3 storage, then operations are further removed from storage capacity considerations as these are so scalable a normal enterprise will never hit the limits – and no capacity management is required in the traditional sense. Capacity management now moves to the question “How efficient are we being with our used capacity, can we save money?”

4. Automated Changes in Response

Responding to expected and unexpected demand causes much stress for a capacity manager. For instance, in a typical fixed-size, on-premise IT system there are physical limits to the processing capacity. The normal behavior when capacity demand exceeds current supply is to push out or de-prioritize non-production workloads – something has to give. But what if getting the new product live is also business critical, and that’s what the non-production workloads are doing? Is the unplanned production capacity demand now delaying an important product release, promised to customers already through advertising and other communications? In the cloud, this is handled differently. Capacity managers can use automated systems such as AWS EC2 Auto Scaling to manually, by schedule, or dynamically add capacity, such as more compute or more load balancers. The only upper limit to capacity supply is how much the business can afford to spend.
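To make that concrete, here’s a minimal sketch assuming boto3 and an existing EC2 Auto Scaling group (the group and policy names are hypothetical): a target-tracking policy that adds or removes instances to keep average CPU around 50%, so capacity follows demand without a human pushing buttons.

```python
# Illustrative only: attach a target-tracking scaling policy to an existing
# Auto Scaling group so capacity expands and contracts with demand.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                 # scale out above, scale in below
    },
)
```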

5. Balance of OpEx and CapEx

Pay-as-you-go (PAYG) is one of the five essential cloud characteristics. This consumption-focused purchasing method means that you can align operational expenditure to business need by consuming only the cloud services you need. The alternative approach with on-premise is purchasing hardware and software, and owning (and managing) these assets for a three, five, or seven-year period. Some organizations have budget arrangements to annually plan spend against capital expenditure. This can also be done with the cloud by mixing and matching reserved capacity (annual) and PAYG (on demand). This allows capacity managers to cater for mostly-steady but occasionally-“bursty” workloads. The other demonstration of mixing OpEx with CapEx is the so-called Hybrid Cloud model – mixing CapEx-laden on-premise systems with OpEx-savvy public cloud, handling the steady-state workloads on-premise and the fluctuations in the public cloud – if you can achieve this technically, architecturally, and operationally, that is. Capacity management is still important, but different, when it comes to cloud. The old constraints no longer apply: a modern capacity manager is constrained only by budget (and its efficient use) and a workload’s ability to exploit cloud architecture for auto scaling.
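As a worked example of that blend, with entirely hypothetical instance counts and prices, here’s a quick comparison of running everything on demand versus covering the steady baseline with reserved capacity and paying on demand only for the seasonal burst.

```python
# Hypothetical annual cost comparison: all on-demand vs. reserved baseline + PAYG burst.
HOURS_PER_MONTH = 730

baseline_instances = 10          # steady-state workload, running all year
burst_instances = 20             # extra capacity for one month of seasonal peak
on_demand_rate = 0.10            # $/instance-hour (hypothetical)
reserved_rate = 0.06             # $/instance-hour equivalent (hypothetical ~40% saving)

all_on_demand = (baseline_instances * 12 + burst_instances) * HOURS_PER_MONTH * on_demand_rate
blended = (baseline_instances * 12 * HOURS_PER_MONTH * reserved_rate
           + burst_instances * HOURS_PER_MONTH * on_demand_rate)

print(f"All on-demand: ${all_on_demand:,.0f} per year")                       # $10,220
print(f"Reserved baseline + on-demand burst: ${blended:,.0f} per year")       # $6,716
```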

Yes, WannaCry Makes Me Want to Cry

Posted by on May 16, 2017 in General IT
One of my biggest fears as an IT manager is coming to work one day and finding out that my company network was hit by a ransomware attack. So you can imagine my reaction when I read the world news on Friday (May 12th) about the widespread ransomware attacks that were taking place, affecting very large institutions – places where you’d think their security standards had them completely covered. As it turned out, this was happening worldwide, but the big hit was felt mostly in Europe.

Ransom Run by the Rotten Mafia

By now we all know the name of that ransomware attack is “WannaCry” (official name Ransom:Win32/WannaCry) and, like all other ransomware attacks, it encrypts files on an affected computer as well as any other network files that are available to that computer. After the encryption, the hackers behind this ransomware leave a text message or some kind of note notifying the user that their files have been encrypted and that the only way they can get their data decrypted is to pay cash, after which the “nice” hackers will send over a key to unlock all their files. It’s literally a ransom situation run by the mafia of the cyberworld! After reading a bit on the WannaCry ransomware, I understood that the best way to protect ourselves from such an attack is to deploy the Microsoft Security Bulletin MS17-010 fix, which was released in March 2017. Yup, not that long ago… hence many organizations and individuals had not done so before the insane cyber attack over the weekend.

Can You Say Patch Management?

As the IT Manager at SysAid, I’m using all the SysAid tools (but of course!) to manage my IT services and support, and that includes SysAid Patch Management, which helps to keep our Windows-based servers and PCs always up-to-date with the latest security patches/updates. This means that all of my users’ computers were being patched on a regular basis and that the necessary security fix was already deployed. Thank goodness! To feel completely secure, as an added precaution, I logged in to my SysAid Cloud console so I could assess the deployment of the MS security fix across my network, and with a simple report I was able to see which computers had the security fix installed and where it was missing (there are various reasons why a mass patch deployment can fail). Then, with this data from my report, I was able to directly attend to those computers and make sure that the security fix was properly installed on them. To make a long story short, as with all IT-related security issues, one of the most important things is a fast response! Finally (a shameless plug), I’d love to share with you this entertaining video that my marketing colleagues put together so you can understand how SysAid Patch Management works.
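As an aside, and purely as an illustrative sketch outside of anything SysAid does for you, a quick spot-check of a single Windows machine from Python could look something like the following. The KB numbers vary by Windows version, so the ones below are examples only and should be verified against the Microsoft bulletin.

```python
# Illustrative spot-check: does this Windows machine report an MS17-010-related hotfix?
import subprocess

# Example KB numbers associated with MS17-010 on some Windows versions (illustrative only).
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012213", "KB4012216", "KB4013429"}

def installed_hotfixes() -> set[str]:
    # 'wmic qfe' lists installed Windows updates (Quick Fix Engineering entries).
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    if installed_hotfixes() & MS17_010_KBS:
        print("An MS17-010-related hotfix appears to be installed.")
    else:
        print("No MS17-010-related hotfix found - patch this machine.")
```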

The Emerging Cloud Service Delivery Manager Role

Posted by on May 9, 2017 in Cloud
Clouds are services, not products or technologies, so who better to manage them than an IT service manager with a “special set of skills.” Let’s call them Cloud Service Delivery Managers. The type of organization that has heavily invested in IT service management (ITSM) is likely to be the “complicated” kind of IT organization that uses many cloud service providers to provision IT services. Even the smallest, “simplest” organizations might be using multiple clouds for business and struggling to manage all the different cloud bills, user accounts, and integrations – now imagine that pain times a thousand. That’s the reality for these complex organizations as they juggle cloud, DevOps, ITSM, and possibly even service integration and management (SIAM).

Cloud Management Complexity

According to the RightScale State of Cloud 2017 Report, which is based on a survey of over 1,000 practitioners, the cloud “situation” is getting increasingly complicated as cloud pushes into all aspects of business. From end users using cloud storage for work files, and cloud mail for email, to whole data centers now being replaced by large cloud service providers – cloud is everywhere! Plus, DevOps loves cloud. Thus, the challenge that all organizations have, regardless of size, is cloud service management. This ranges from consolidating the bills from the various services into finance all the way to some service administration and being the central point for standards and compliance. And the secret is to not “get in the way” while simultaneously de-risking the consumption of cloud services for the business. Without someone like a Cloud Service Delivery Manager taking ownership of the above, it will become the responsibility of each department, product team, or even individual end users. This might be perceived as great by freedom-seeking individuals who finally feel unshackled from IT – until, that is, the bill isn’t paid, confidential data is leaked by an ex-employee, or services can’t communicate – causing ever-increasing cost and pain. This Cloud Service Delivery Manager role is an emerging reality, based on real organizational needs related to cloud computing. In this blog, I want to explore the needs driving the role and what the role actually entails.

The Driving Needs for Cloud Service Delivery Managers

All cloud services share similar characteristics and these can be seen differently through the eyes of a cloud optimist or cloud pessimist:
Essential Cloud Characteristics | The Optimist | The Pessimist
On-demand self-service | Do it all by myself! No IT personnel needed! | No IT involvement! No controls on end users!
Broad network access | Apps are available from anywhere | Apps are available from outside controlled environments
Resource pooling | Lower cost by sharing public resources | Noisy neighbors and risk of exposing business data
Rapid elasticity | I can balance business demand with cloud supply, scale to whatever I need | Scale up but forget to scale down, wasting money, with unpredictable bills
Measured service | I can see exactly what I’ve used | Multiple reports from different cloud service providers need reconciliation
At the heart of the Cloud Service Delivery Manager role is balancing these two perceptions, which brings us to the question: what do Cloud Service Delivery Managers do?

The Roles and Responsibilities of a Cloud Service Delivery Manager

A Cloud Service Delivery Manager’s goal is to be at the center of tension between opposing drivers in the business. On the one hand, you have the business governance requirement to ensure risks are managed and money is spent wisely. On the other hand, you have business departments and individuals who want the empowerment and agility to create new business opportunities and revenues. These two needs are often at odds because control can hamper agility, but not always – as the great IT Process Institute book, Visible Ops, once said: “Change (control) is like the brakes on your car – it lets you go faster!” This quote is also used in the context of DevOps and increasing velocity. Thus, the role of the Cloud Service Delivery Manager is to mirror this statement – to operate in this center of tension and to balance the opposing needs of agility and governance. But how? The Cloud Service Delivery Manager needs to have responsibilities across five key areas:
Cloud Service Delivery Manager Responsibilities
Billing ➢ Consolidate all cloud bills for payment by finance ➢ Implement mechanisms to keep downward pressure on spend, such as turning off development resources (where unused outside of business hours) and limiting subscriptions to those who need them for their role
Users ➢ Manage end-user adds, edits, and deletes to subscription services through a central directory ➢ Manage the on-boarding and exiting of staff in terms of cloud service subscriptions ➢ Run compliance reports on cloud services users and usage
Services ➢ Service catalog ownership and control
Compliance ➢ Educate cloud consumers on their rights and responsibilities ➢ Advise cloud service administrators on compliance requirements ➢ Run compliance reports, enforce standards and disciplinary policies
Integration ➢ Maintain the map of business processes to services ➢ Assist with integration of cloud services (but the Cloud Service Delivery Managers don’t get in the way)
The secret to the success of this role is definitely “don’t get in the way” while mitigating risks and continually developing, monitoring, and enforcing standards and compliance. As soon as the Cloud Service Delivery Manager makes the mistake of becoming a choke point where “all things cloud go through me,” they will introduce slowness and complexity into cloud service consumption, people will go around them, and the organization’s cloud consumption will splinter. There are levels of maturity in cloud service management that can be understood by using a simple checklist, something the Cloud Service Delivery Manager will work through for each cloud service (a minimal scoring sketch follows the checklist):
  • What do I know about the cloud service? Is it invisible vs. on the radar vs. under control?
  • Are we collecting metering and bills? Is finance paying for it, or is it on an employee credit card?
  • Are we managing the end users? If the end users are not linked to the corporate directory, then this is a red flag risk.
  • Is this service understood, and available through an on-boarding process and our service catalog?
  • Do the cloud service consumers and administrators know their rights and responsibilities? Are they compliant? How can we help them enforce the rules? Do we have a reporting, escalation, and disciplinary process? Do we encourage whistle-blowers and white hat hackers?
  • Is this service integrated into other services? What are the dependencies and what’s the business risk?
  • Across all of the services, how does each one score, and where does it sit in our priorities for improvement resources?
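As promised, here is a rough sketch of how that checklist could be turned into a per-service score, purely as an illustration; the field names and example services are made up.

```python
# Illustrative per-service maturity scoring based on the checklist above.
CHECKLIST = [
    "known_and_on_radar",
    "billing_consolidated",
    "users_linked_to_directory",
    "in_service_catalog",
    "consumers_know_responsibilities",
    "integrations_mapped",
]

def maturity_score(service: dict) -> int:
    """Count how many checklist items a service satisfies (0 to 6)."""
    return sum(1 for item in CHECKLIST if service.get(item, False))

# Hypothetical example services
services = {
    "cloud-storage": {"known_and_on_radar": True, "billing_consolidated": True,
                      "users_linked_to_directory": False},
    "crm-saas": {item: True for item in CHECKLIST},
}

# Lowest-scoring services bubble to the top of the improvement priority list.
for name, answers in sorted(services.items(), key=lambda s: maturity_score(s[1])):
    print(f"{name}: {maturity_score(answers)}/{len(CHECKLIST)}")
```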
The Cloud Service Delivery Manager could be a key role in your business as cloud services become all pervasive across the organization. Doing it badly will have a negative impact on business operations and success, such as if it becomes a choke point, but not doing it at all and letting individuals expose the business to known cloud risks with no guidelines could be more than negative – it could be catastrophic.

How Do You Measure IT Services?

Posted by on May 2, 2017 in ITSM
All the IT service providers I’ve worked with assure me that they measure the services they provide. They use metrics and KPIs to do this, and have service-level agreements (SLAs) with their customers, which they use to document what’s been agreed. Unfortunately, most of the metrics and KPIs that I’ve seen only measure and report the things that service providers can control, and not the things their customers actually care about. This tends to result in reports that show service providers meeting most, if not all, of their targets, even when customers are distinctly unhappy about the service. This happens so frequently that the phenomenon even has a name: the “watermelon SLA”. When you look at a watermelon from the outside, all you see is green. Delve a bit deeper and you discover that most of the fruit is red. In the same way, we report that everything is meeting its targets (it’s all green), but if you delve into the customer’s experience you find that it’s not very good (it’s mostly red). So, what can service providers do that will help them to delve that bit deeper and focus on what matters?

What’s Important to Customers?

When service providers think about any service, they need to pay attention to what’s important: the value of the service, the intended outcomes, the costs, and the risks – together referred to as VOCR. Here is a brief summary of what we mean by these terms.
  • Value is a measure of the benefits that the service creates for the customer. This could be in terms of money, but it might be something like lives saved, or some other indication of the customer’s mission.
  • Outcomes are the things that the customer achieves as a result of receiving the service. For example, the service may help the customer to manufacture products, or communicate with their partners, or collect payments on a web site.
  • Costs are the money that the customer has to spend to achieve the outcomes they want.
  • Risks are possible events that could affect the ability of the customer to achieve the desired outcomes.
In my view, if every service provider produced reports showing the VOCR of their services, then customers would have no difficulty understanding what these reports mean, and judging whether the service was good value (or not). Of course, it’s not that easy. The trouble is that value, outcomes, costs, and risks are only partially under the control of an individual service provider. For example, consider an IT company that provides a manufacturing support service to a car manufacturer. The outcome that the customer wants might be “a car comes off the production line every 40 seconds” (my thanks to James Finister for this example), but this outcome is not completely controlled by the IT service. Even if the IT service works perfectly there could be other reasons why a car doesn’t come off the production line. So, what should you measure and report in this case?

What Should You Report to Your Customers?

Under these circumstances most IT service providers would probably report things like:
  • Availability of the IT service
  • Number and severity of incidents
  • Average time to resolve incidents, subdivided by category
The information provided might be accurate, but it is very IT specific. It represents an IT service provider’s view of the world. But of course, this isn’t what the customer cares about. The metrics that matter to the customer are how many cars come off the production line (outcome), how much extra cost they incur due to IT failures (cost), how much production they might lose next week (risk), and, derived from these, how much profit they can make selling cars (value). So, is it possible to create IT metrics and reports that actually show all of these things? Yes, it is. An ideal SLA could state the following:

“The desired outcome from this service is to support the production of one car every 40 seconds.”

The monthly report could then show whether this was achieved. Similarly, there could be report sections on cost and risk, showing how well these have been managed for the customer. Does this mean we stop measuring IT service availability, number and severity of incidents, and incident resolution times? Obviously not. Providers can’t do their jobs well without this information. What I am saying is that these are not the key things to report to the customer. The main findings of any report to customers should be about value, outcomes, cost, and risk; IT-centric issues should be reported within this context, as contributing factors, and at whatever level of detail suits the specific customer. Here are two more examples of IT organizations that have great business-focused reports (a rough sketch of what such a report structure might look like follows them):
  • One client that I worked with had a key business metric stating that they should never lose any customer data (the data represented a lot of money). It didn’t matter if the data was lost because a paper document got lost before it was scanned, or because of an IT error. The thing that mattered is that every piece of customer data must be correct and accurate at all times. The reports they produced started by summarizing the status of customer data, THEN they went on to account for any risks or issues that had occurred that month. Some of these were IT related, others were down to the various business units, but the report included everything.
  • Another client was responsible for shipping parcels to end customers. Their key metrics were the percentage of parcels that arrived on time, and the percentage of parcels that were lost. As in the other examples, these things might be caused by IT failures, or they might be due to many other issues, but the IT reports focused on the business metrics, and then on how IT had contributed to these.
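To illustrate the shape of such a report, here’s a rough sketch, with entirely hypothetical figures, of a monthly report structured around outcome, cost, risk, and value first, with the IT-centric measures kept as contributing detail rather than the headline.

```python
# Hypothetical monthly report for the car-manufacturing example above.
monthly_report = {
    "outcome": {
        "target": "one car off the production line every 40 seconds",
        "achieved_percent": 98.5,
    },
    "cost": {"extra_cost_due_to_it_failures_usd": 42_000},
    "risk": {"estimated_lost_production_next_week_cars": 120},
    "value": {"estimated_profit_from_cars_sold_usd": 3_600_000},
    "it_contributing_factors": {      # still measured, just not the headline
        "service_availability_percent": 99.2,
        "major_incidents": 1,
        "average_resolution_hours_by_category": {"application": 4.5, "network": 2.0},
    },
}

for section, figures in monthly_report.items():
    print(section, figures)
```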
What about your SLAs and customer reports? Are they about IT or are they about value, outcomes, cost, and risk? Maybe it’s time to think about what your customers really care about, and create SLAs and reports that match their view of the world.

The 7 Deadly Sins of Change Management

Posted by on April 25, 2017 in ITSM
Many businesses, and IT organizations, become frustrated with a lack of agility and responsiveness in their change management process. Rather than being viewed as a “value enabler,” the change management process is often seen as overly bureaucratic and a hindrance to getting things done. But in my experience, these issues usually boil down to a poor implementation and a misunderstanding of the purpose of change management.

What Is the Purpose of Change Management?

The change management process has three primary purposes:
  • To ensure the appropriate planning, review, coordination, and communication of a change. While not all changes are created equal, all changes must have the appropriate degree of planning, evaluation, approval, orchestration, and communication. Without these elements, there can be no control over the managed environment, which ultimately means that the business cannot rely on IT.
  • To protect what’s already in production. A change must have no negative impact to services that are already in the managed environment.
  • To ensure that a change delivers the intended result. The whole reason why a change is being made is to deliver a planned result. If a change is implemented, but it does not deliver the intended result, this points to larger issues that must be addressed.
Seems rather straightforward and common sense, doesn’t it? So why do so many change management implementations result in frustration, subterfuge, and headache? Perhaps you’ll recognize some reasons in my list below.

The Seven Deadly Change Management Sins

  1. Every request for change has to go before a change advisory board (CAB). Out of all the sins, this is the one that I see most frequently. Because change models and evaluation criteria are not defined, *every* request for change – regardless of complexity, resource needs, or impact – gets dumped onto the CAB.
  2. No true management support. This is my second-most encountered issue with change management implementations. I believe that management – especially senior management – must exhibit strong, visible support to ensure an effective change management process. But what I often find when it comes to enforcing policies and supporting the process, is that senior managers do not want to (visibly) enforce the very policies and process that they commissioned!
  3. Request for Change (RFC) not raised…until just before implementation. This behavior essentially guts the change management process by bypassing the crucial initial review, evaluation, planning, and communication of upcoming work. This, in turn, has a cascade effect on work that has already been planned, resulting in resource conflicts.
  4. No one, other than the change manager, has any accountability for the success of the process. I frequently encounter the misconception that once an RFC is logged, then it’s the responsibility of the change manager to coordinate all of the activities related to the change. The fact is that the change manager is accountable for the operation of the process – and ensuring that the handling of each RFC follows the process – not the design, build, testing, and implementation of a change itself.
  5. The change schedule is not published outside of IT – sometimes, it’s not even published *within* IT. The change schedule is intended to promote communication and transparency, not only regarding the change management process, but also regarding the demand and workload within IT.
  6. CABs do not have the proper membership. CABs are often just made up of IT personnel, with no participation from business colleagues.
  7. Process over-engineering. When discussing change management, I often use the analogy of the control tower at an airport. The control tower ensures that at any given time there is one and only one airplane on any given runway at the airport. The control tower does not pilot the plane, does not manage the loading and unloading of passengers and cargo, nor the servicing of the aircraft, and so on. Those activities belong within other roles and procedures. The control tower’s job is to ensure safe landings and take-offs. A good change management process is like the airport control tower – but unfortunately, many change process definitions include all of the other “airport operations” that really shouldn’t be there.
Any of the above sound familiar? It’s no wonder that our business partners are frustrated. It’s no wonder that IT is frustrated.

Seven Things You Can Do to Fix It

If your change management process is suffering, here are seven things you can do to fix it:
  1. Define change models. A change model is simply a pre-defined way of implementing a change. Executing the steps within a change model is often the fastest way to implement a change.
  2. Define and enforce the responsibilities of the change owner. The change owner is accountable for the success of a change, not the change manager. It is the change owner who is responsible for capturing the requirements for a change, ensuring the design and build of a change, defining and executing appropriate testing, and ensuring a smooth implementation of a change.
  3. Define evaluation criteria. Not all RFCs should go before a CAB for review and authorization. In fact, in a well-defined change management process, most RFCs should be authorized by someone other than a CAB. Defining evaluation criteria helps ensure the “right” RFCs go before the “right” people as defined in… (see my #4).
  4. Define a change authority matrix. Defining and following a change authority matrix identifies who the “right” people are to review and authorize RFCs. Have your senior management review, approve, and sign off on it – this helps enforce accountability – and get management commitment. (A simple sketch combining this tip and #3 follows the list.)
  5. Publish the change schedule – to everyone. Not only will this improve change management awareness and communications within IT, it is a great way to illustrate how much work is getting done by IT.
  6. Produce and publish metrics that make sense to the business. While publishing the number of changes implemented within a timeframe is nice to know, measuring and publishing the business improvements resulting from the implementation of a change is much more meaningful. For example, if a change is intended to reduce order processing time – measure that!
  7. Remove any non-value added work and wait time from the process. For example, why arbitrarily conduct CAB meetings on a weekly basis? Take a page out of Agile and use the daily stand-up meeting like a “CAB meeting.”
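As promised above, here’s a minimal sketch of tips 3 and 4 combined: simple evaluation criteria that route each RFC to the “right” change authority. The categories and thresholds are made up; your own matrix should be agreed and signed off by management.

```python
# Illustrative evaluation criteria routing an RFC to a change authority.
def change_authority(risk: str, impact: str, uses_change_model: bool) -> str:
    """Return who can authorize the RFC, based on simple evaluation criteria."""
    if uses_change_model and risk == "low" and impact == "low":
        return "standard change - pre-authorized, no CAB"
    if risk == "low":
        return "change manager or local authority"
    if risk == "medium":
        return "CAB"
    return "senior management / emergency CAB"

print(change_authority("low", "low", uses_change_model=True))     # pre-authorized
print(change_authority("high", "high", uses_change_model=False))  # senior management
```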
Is your change management process not producing the results your organization needs?  Even worse, is your change management process in the way of making changes? Don’t give up – these seven fixes are sure to move your change management process from being the “barrier” to being the “value enabler”! If you’d like more tips, I highly recommend Stuart Rance’s blog listing his top tips for supercharging your requests for change. And for the beginners out there, Joe The IT Guy does a great job explaining the basics in his blog ITSM Basics: A Simple Introduction to Change Management.

4 Reasons Why ITSM is a Key Investment for SMBs

Posted by on April 18, 2017 in ITSM
IT service management (ITSM) is not just a “big company” opportunity – as both the need and the benefits are not solely dependent on the relative size of an organization’s IT estate, employee numbers, and/or budgetary power. Small and medium-sized businesses (SMBs) should also look to ITSM as a route to better quality IT services and support, increased efficiency and effectiveness, reduced costs, and a better employee/customer experience – and, ultimately, to better business outcomes, enabled through the delivery of better IT. And while economies of scale might mean that there’s greater potential for financial savings in larger organizations, the limited resources (money and people) of SMBs mean that ITSM might actually have a more significant impact, as ITSM empowers SMBs to “do (and deliver) more with less.”

Arguments as to Why SMBs Don’t Need ITSM

This has already been touched on in my opening section, but it’s worth taking a closer look at the arguments – for example, SMBs potentially think:
  • “We don’t spend enough on IT to make ITSM worthwhile”
  • “We don’t have enough people to do ITSM”
  • “ITSM will cost more than it saves”
  • “We don’t need the 26 processes and four functions of ITIL”
I could go on, as it’s easy to offer many reasons – or excuses – as to why SMBs don’t need ITSM. But many of these reasons are shortsighted, overlooking the fact that ITSM investments can be as big or as small as you need them to be. And, as with any reasoned investment, the returns will outweigh the costs when scoped, planned, and executed correctly.

Four Reasons Why SMBs Need ITSM

1. Modern Companies Are Totally Reliant on IT

Most, if not all, companies now need IT just to operate, let alone for competitive advantage. But many do need IT to differentiate themselves and to win and retain business, with IT a valuable part of the overall business jigsaw puzzle. And this is regardless of size – from large enterprises to SMBs. The IT organization/capability, whether a cast of thousands or just one person and their cat, plays an important role in keeping the business operating – from employee productivity to customer-facing IT systems. Investing in ITSM best practice helps to ensure that IT services are designed, delivered, managed, and changed in an optimal way – providing the best quality of IT service at an acceptable, maybe even optimal, price.

2. IT Support Needs to Always Bring its A-Game

Without ITSM best practice, SMB IT capabilities are at risk of “blowing in the wind” (usually where the loudest voice gets attention first), or of failing to prioritize workloads correctly, i.e. the easiest things get done first to keep work volumes low. In either scenario, IT, and particularly IT support, isn’t bringing its A-game, no matter how talented the involved people are. ITSM best practice helps SMBs to structure their ways of working such that the most important needs and issues are dealt with first, through to the least important service requests. Plus, if applicable, a fit-for-purpose ITSM tool not only supports this through priority matrices, workflow and automation, and knowledge management, but the introduction of a self-service capability can also allow end users to help themselves with simpler issues and information requests – thus freeing IT staff up to focus on more complex, and potentially more important, things.
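For illustration only (this isn’t SysAid’s implementation), the kind of priority matrix an ITSM tool applies might boil down to a simple mapping of impact and urgency to a priority level, along these lines:

```python
# Illustrative impact/urgency priority matrix; labels and mapping are assumptions.
PRIORITY = {
    ("high", "high"): "P1",
    ("high", "medium"): "P2",
    ("medium", "high"): "P2",
    ("medium", "medium"): "P3",
    ("low", "high"): "P3",
    ("high", "low"): "P3",
    ("medium", "low"): "P4",
    ("low", "medium"): "P4",
    ("low", "low"): "P5",
}

def prioritize(impact: str, urgency: str) -> str:
    return PRIORITY[(impact, urgency)]

print(prioritize("high", "high"))  # P1 - e.g. a customer-facing system is down
print(prioritize("low", "low"))    # P5 - e.g. a cosmetic request
```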

3. Change Can Do More Harm than it Helps

ITSM isn’t just about the service desk, although some of the other ITSM best practices do link in with, or affect, IT support – a good example being change management. Why? Because failed or poorly executed changes can place an additional burden on IT support staff through high levels of change-related incidents. And working in IT support is hard enough without colleagues creating even more IT issues to deal with! So, look to change management best practice to understand:
  • How best to balance change risk and speed.
  • How to involve relevant business parties in change decisions when the risk and impact of change failure is high.
  • How best to prioritize and schedule changes for optimal business effect.
  • And how to employ change models to facilitate the swift delivery of low risk and low impact changes.

4. People Are Under Pressure and Potentially Underperforming

Without ITSM best practice or, more specifically, a fit-for-purpose ITSM tool enabling ITSM best practice, it can be hard to ascertain individual and team performance. For instance, people might seem busy as they run around fixing things – but are they really making best use of their time and performing well? It can be hard to gauge performance when issues and requests are “managed” in email inboxes, spreadsheets, and people’s heads. There’s no way of knowing how long things took to resolve, whether IT support productivity is high enough, or – probably even more importantly – whether resolutions were delivered in line with service level targets and end-user expectations. ITSM best practice and tooling can help considerably here, allowing management as well as IT staff to understand where improvements can be made (at both an individual and team level) and helping ensure that limited IT resources are used optimally across the SMB’s spectrum of IT service delivery and support needs. So, those are my four reasons why SMBs should invest in ITSM. Would you agree? What would you add?

A Beginner’s Guide to Serverless Computing

Posted by on April 12, 2017 in Cloud
The “serverless” computing paradigm emerged a couple of years ago in Amazon Web Services (AWS) as a new cloud service called Lambda – a serverless computing platform. And now, even though it’s mainstream, it can nonetheless still be new or unknown to many in the IT industry. This blog explains and visualizes serverless for IT professionals who are new to the topic, covering:
  1. The misleading serverless name
  2. Serverless on a napkin
  3. Comparing serverless to other computing paradigms
  4. The serverless ecosystem
  5. The serverless sweet spot
So please read on to learn more about serverless.

1. The Misleading “Serverless” Name

Why is the name “serverless” misleading? It’s because code still needs to run on a server somewhere. But in the case of serverless, it’s not your server. With serverless, the cloud service provider (CSP) is doing all the low-level infrastructure work for you. You don’t see the servers (even though they’re out there somewhere), so, from your perspective, there are no servers to manage – hence, it’s serverless. There has been much debate about the name. Some people have wanted to change it to functions-as-a-service (FaaS), which is very accurate, but it didn’t stick. Maybe we are now all aaS-ed out? So “serverless” it is then. And there’s even a series of global serverless conferences now.

2. Serverless on a Napkin

There are five simple steps to understanding how serverless works, under the covers, but first, the headline – serverless is an event-driven compute paradigm. It works as follows:
  1. You write and upload your code.
  2. You define the triggers.
  3. An event triggers your code.
  4. Your code does one thing.
  5. The end.
You can log in to AWS directly and do this manually but, in reality, this will be part of an automated DevOps pipeline, integrated into other systems such as AWS S3 storage, DynamoDB databases, and SQS queues. If you’re an artist, and have a big enough napkin, you could try to draw this example application of serverless on AWS while you sip on your gin and tonic.

Example application of serverless on AWS

Image source: AWS

To go beyond the napkin, head over to the Serverless Framework to dig deeper into the topic thanks to the site’s excellent content.
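To make step 1 slightly more concrete, here’s a minimal sketch assuming AWS Lambda’s Python runtime: the platform invokes the handler whenever a configured trigger fires (for example, an object landing in an S3 bucket), and there is no server for you to provision or manage.

```python
# Minimal AWS Lambda handler sketch (Python runtime).
import json

def lambda_handler(event, context):
    # 'event' carries the trigger's payload; for an S3 trigger it includes
    # bucket and object key details.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "done"}
```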

3. Comparing Serverless to other Computing Paradigms

Over time, moving from left-to-right in the image below, organizations are increasing their use of:
  • Cloud services to cede control to CSPs
  • Smaller batch sizes to reduce change scope and increase change frequency, and
  • Small services at lower cost at all scales.

Spectrum of compute paradigms

Image source: ViewYonder

4. The Serverless Ecosystem

So where do you buy these serverless “things”? As serverless lives at the bleeding-edge of the technology wave, it’s still a mixed, and changing, bag. The table below lists some popular serverless offerings:
Offering | Cloud? | On-Premise? | Launch Date | Languages
AWS Lambda | Yes | No | 2014 | Node.js, Python, Java
Azure Functions | Yes | Yes | 2016 | C#, Node.js, Python, F#, PHP, Java
Google Cloud Functions | Yes | No | 2016 | JavaScript
IBM OpenWhisk | Yes | Yes | 2014 | Node.js, Swift

5. The Serverless Sweet Spot

Not everything is going to go serverless, and rightly so, because there are some use cases for which it doesn’t make sense. The diagram below shows the current sweet spot for serverless:

Serverless sweet spot
Image source: ViewYonder

Latency is the time measured from the trigger of your code until it completes, and it can vary depending on how CSPs load and run your code. Unchanging, infrequently accessed code can be “unloaded,” or “spun down,” but this means more latency when that code is next run because it has to be reloaded and “spun up.” A good rule of thumb when considering serverless: the more latency-sensitive and the more frequently accessed your code is, the more likely it is (at high volumes) that it’s better to run it on dedicated hardware somewhere, with you managing the server. The sweet-spot diagram above illustrates the kinds of application types where serverless currently fits best. So, there you have a quick guide to serverless. If you already know about serverless, what would you say are the most important things to know (and understand) about it that I haven’t already called out?

5 Tips to Help Prioritize Your CSI Improvements

Posted by on April 4, 2017 in ITSM
I have often said that, in our rapidly changing business and technical environment, continual service improvement (CSI) is the most important service management process. If you don’t keep improving what you do, then you don’t just stay still, you gradually fall behind. This happens because:
  • Your competitors keep improving, which causes customer expectations to keep rising even if you don’t improve.
  • Your customers’ needs evolve, hence delivering what they used to need no longer delivers the value they’re looking for; to keep up you have to deliver what they need now.
There are many well-publicized examples of organizations that failed to adapt to a changing environment and so went out of business.  Here are some ideas of how your IT staff can contribute to help ensure your company doesn’t join them.

Identify Your Improvement Opportunities

Before you can prioritize improvements, you need to identify what improvements you could make. It’s surprisingly easy. Create a CSI register for logging and tracking improvement suggestions, and then:
  • Ask IT staff what improvements are needed. The people who do the work always know what’s problematic, and what needs to be improved. When I work with IT organizations, I always ask people what needs to be improved and they invariably give me a long and accurate list.
  • Ask customers what improvements are needed. Do you really know how your customers experience your services? Do you know what they love and what they’d love you to do better? This is even more important than asking the people who do the work. Every organization should ask their customers what they like about the services they receive, what they want more of, and what they dislike; and they should do this regularly. Ideally you should have business relationship managers to do this, but even if you don’t, someone needs to talk to your customers to find out what they would like to see improved.
  • Review your metrics to identify trends that could cause future issues. If you are measuring and reporting on targets, either internal targets or customer-facing ones, then you can use trends to identify where you need to intervene to ensure that the targets are met. This is much better than waiting until a target is breached before trying to fix the issue.
Make sure that you log and track all the improvement suggestions you identify.
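As a rough illustration (the fields and scoring scale are made up, not taken from any particular tool), a CSI register entry and the simple value-first ordering used in the tips below might look like this:

```python
# Illustrative CSI register entry and a simple value-vs-cost ordering.
from dataclasses import dataclass

@dataclass
class Improvement:
    title: str
    source: str            # "IT staff", "customer", "metric trend"
    customer_value: int    # 1 (low) to 5 (high)
    estimated_cost: int    # 0 = zero-cost improvement
    in_progress: bool = False

register = [
    Improvement("Self-service password reset", "customer", customer_value=4, estimated_cost=2),
    Improvement("Tidy up knowledge base tags", "IT staff", customer_value=2, estimated_cost=0),
]

# Highest customer value first; cheapest first as a tie-breaker.
for item in sorted(register, key=lambda i: (-i.customer_value, i.estimated_cost)):
    print(item.title)
```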

Prioritizing Improvements

When you first create a CSI register, you will almost certainly find you have identified a very large number of things that need to be improved. It’s easy to feel overwhelmed, and to be uncertain about where to start. Here are some suggestions to help you prioritize your improvement opportunities and feel confident that you are starting with the right things.

1. Limit Work in Progress (WIP)

The first thing to think about is how much capacity you have for making improvements. There’s no point in starting work on 20 different improvements, and then running out of time, money, or other resources so that nothing gets finished. Assess how much improvement work is realistic, given your circumstances, and help yourself succeed by making sure you don’t start too much. If your team uses a Kanban board, showing all the work you have outstanding and how much work you currently have in progress, then it’s easy to add the improvement opportunities to your board and use this to help you manage WIP. In any case, decide how much improvement work you can do, and don’t take on more than you can finish. You can read more about Kanban boards in my blog Using Kanban boards to support IT operations.

2. Improve in Short Sprints

If you need an improvement that will take a long time to complete, think about how you could break it down into smaller steps, while making sure that each increment delivers real value. I have seen IT organizations start improvement projects that are intended to replace many tools and processes, but won’t deliver any value for the first 12 months. This is never appropriate. If you make use of some Agile ideas to help you plan, you can always find ways to create value in short sprints. Aim to complete each sprint in less than four weeks, so that everyone can see real improvements and you can keep the continual improvement momentum going. You can read more thoughts on this topic in my blog Major ITSM Improvements Should Start with Small Steps.

3. Focus on Value for End Customers

Once you have identified several possible sprints, and you know your capacity for delivering them, you need to pick a small number to start on. I suggest that you evaluate each possible improvement in terms of how much value it will create for end customers. Since everything you do is ultimately funded by end customers of the business you work for, this will give you the clearest possible indication of which improvements to work on first. Creating better value for the end customers who keep you in business is always going to be important.

4. Look for Zero-Cost Improvements

I have sometimes worked with organizations that tell me they can’t do continual improvement because there is no money, or time, for making improvements. For these organizations, I always recommend that they focus on zero-cost improvements. There are always improvements you can make that have no significant cost, and use very little time. By starting with these zero-cost improvements you can begin to establish a culture of continual improvement, particularly if your zero-cost improvements free up some resources that could be used to start on other improvements (see next tip).

5. Free Up Resources for Future Improvements

Some improvements can result in a long term reduction in the number of people or other resources you need, which can in turn enable you to carry out further improvements. For example, if you start doing problem management, you may identify a few frequently recurring incidents and permanently fix them. This could free up service desk personnel who can be assigned to do some more problem management work. Eventually a ‘virtuous cycle’ like this can result in enormous improvement in customer experience for a very small investment.

Summary

If you’ve been putting off continual improvement because you don’t have enough resources, then maybe you should think again. You can start improving with very little effort and the impact can be enormous. The best time to start is right now! Follow the 5 tips in this blog to ensure you are focusing on the improvements that will create the most value, in the shortest time, for the lowest cost.

How Hot Is Your F11 Key?

Posted by on March 30, 2017 in SysAid
Hey there, long time no blog post! I’ve been working at SysAid for more than two years now, and lately it has come to my attention that many of our customers don’t know about the SysAid Agent F11 hotkey feature. This is one of our standout, unique features, which is also incredibly easy to implement and use, so I felt the need to rectify this situation. Time for a quick refresh!

Simplify IT for the End User, Get the Good Life You Deserve

Anyone who works in IT and support knows how difficult it can be to get an end user to properly describe an issue that they’re having. Don’t even get me started on what happens if you ask them to provide a screen capture: in a good scenario it will require launching a screen-capturing app, taking a screenshot, saving the file, and sending it your way by email or attaching it to a ticket. Not all end users know how to use, or even have, a screen-capturing app. But even if they do, it only adds more steps where something might get screwed up and delay the resolution of the issue, i.e. the IT ticket that becomes *your* problem. SysAid simplifies this with its F11 hotkey. When the SysAid Agent is installed on the end user’s machine, all they have to do is hit the F11 key, and SysAid will capture whatever is on their screen and automatically open SysAid’s Self-Service Portal, in order to submit a ticket with the screen capture already attached! Unless, of course, their desktop is as messy as mine, in which case you might have second thoughts about them attaching it to the ticket ;) Even better, when an end user presses the F11 hotkey, SysAid automatically signs the user into the Self-Service Portal (no need for manual login) and selects their computer as the associated asset in the ticket, making your job of identifying their machine that much easier.

F11, F12, F13 (Just Kidding) Choose What You Want

If the F11 key is already in use in your organization by other software, no problem - you can easily customize SysAid to use a different key. It can be done from the beginning when deploying your SysAid Agents, or at any time afterwards by changing the SysAid Agent settings directly from the Asset List. Simply select the relevant assets, click on “Agent Settings” in the action menu, and select a different F-key, or a different key altogether (using the JavaScript key code, e.g. F11 has a value of 122).
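For reference, the legacy JavaScript keyCode values for the function keys follow a simple pattern (F1 is 112 through F12 at 123), as this small illustrative snippet shows:

```python
# Legacy JavaScript keyCode values for function keys: F1 = 112 ... F12 = 123.
F_KEY_CODES = {f"F{n}": 111 + n for n in range(1, 13)}

print(F_KEY_CODES["F11"])  # 122
print(F_KEY_CODES["F12"])  # 123
```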
Another tip is to place the SysAid desktop shortcut on your taskbar (this is what I did - see the animated GIF below). It works exactly the same as clicking the F11 (or other) function key. You can click it whenever in need, and it will perform the actions described above just as well.

Coming Soon...

Last but not least, good things come to those who wait. We plan on announcing, very soon, a new surprise related to this feature. Keep your eyes peeled! For any questions and feedback, I encourage you to visit the SysAid Community where I’m always happy to help and/or just chat.