Posted by Oded Moshe
on July 18, 2017 in Asset Management
Did you ever see the unfortunate TV weather lady who was interrupted
by the Microsoft Windows 10 upgrade notification?
It was one of those situations that’s funny but also serious at the same time.
It’s a great reminder that, while many of us IT service management
(ITSM) and IT support pros are way too busy fixing “broken” things, the existing IT infrastructure also needs to be maintained and improved – a set of potentially recurring tasks that’s particularly relevant when you consider the risks associated with unpatched machines.
Security Now Perches Atop the Patching Tree
The need to patch software is nothing new, whether it be to apply bug fixes, security updates, or even to deliver new functionality. But these days there’s no escaping the growing focus on security and the threat of vulnerability-based breaches, raising the importance of patching from a “good to do” to a regular must-do task for all organizations.
Many successful attacks exploit well-known vulnerabilities, for which patches already exist. So such breaches can be prevented through the effective patch management
of your IT infrastructure. But it’s also vital to remember that the scope of patching now goes beyond the data center and employee desktop to include cloud services and mobile
and IoT devices.
So Patching is Important but Where Do You Start?
Done manually, patch management isn’t an easy task. Depending on the size and complexity of your IT infrastructure, there are potentially hundreds of applicable patches released every month. Thus an ad hoc approach to patching will most likely never work, or at least never work as well as you need it to.
Instead, it’s important to take a more educated approach to patching and to leverage automation, rather than relying on manual effort, wherever possible. Consider these five questions:
- Which elements of your IT infrastructure require patches?
- Which patches do you need to install and which can you ignore?
- In which order do patches need to be installed?
- What is the best, and hopefully easiest, way to install them?
- How well is your patch management process working?
These five questions form the basis of a more formal, five-step approach to patch management.
1. Which elements of your IT infrastructure require patches?
This starts with knowing what’s in your IT infrastructure and on your network (as not all IoT devices will be considered IT assets) – whether through configuration management or asset management
– and having access to a reliable source, or sources, of security issue and patch release information.
You can then see all of the available patches pertinent to your organization, but that doesn’t necessarily mean that all of them will need to be applied ASAP.
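As a minimal sketch of what this step looks like in code, here’s an example of matching an asset inventory against a patch feed to find which patches are pertinent. The asset names, software titles, and patch IDs are all invented for illustration; in practice this data would come from your discovery/asset management tool and from vendor advisory feeds.

```python
# Illustrative data: what a discovery tool and a patch feed might return.
inventory = [
    {"asset": "ws-001", "software": {"acme-office": "2.1", "acme-pdf": "5.0"}},
    {"asset": "srv-db1", "software": {"acme-db": "9.4"}},
]

patch_feed = [
    {"id": "P-101", "software": "acme-office", "fixed_in": "2.2", "severity": "critical"},
    {"id": "P-102", "software": "acme-cad", "fixed_in": "1.3", "severity": "low"},
    {"id": "P-103", "software": "acme-db", "fixed_in": "9.5", "severity": "high"},
]

def applicable_patches(inventory, patch_feed):
    """Return {asset: [patch ids]} for patches that target installed software."""
    result = {}
    for machine in inventory:
        hits = [p["id"] for p in patch_feed
                if p["software"] in machine["software"]]
        if hits:
            result[machine["asset"]] = hits
    return result

print(applicable_patches(inventory, patch_feed))
# {'ws-001': ['P-101'], 'srv-db1': ['P-103']}
```

A real tool would also compare installed versions against the `fixed_in` version, but even this crude matching shows why a reliable inventory is the prerequisite for everything that follows.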
2. Which patches do you need to install and which can you ignore?
Sadly, patch management often isn’t as easy as just installing every patch as soon as it becomes available. Most organizations will take a risk-based approach. Firstly, not all patches are born equal, i.e. some are more important than others. Secondly, there might be dependencies between patches that a patch-management tool will need to understand and handle through its installation order. Thirdly, patch testing might be required depending on the criticality of the affected system (and its data) and the overall complexity of the IT environment. It’s no different to the standard approaches to change and release that are designed to protect ongoing business operations.
3. In which order do patches need to be installed?
This means having a formal approach to patch prioritization and scheduling (rather than a first come, first applied policy). Again, it’s similar to the standard approaches to change and release. You’ll need a patching policy and plan with a minimum of two elements:
- A patch cycle for regular patches and updates
- A plan for dealing with critical, often security-related, patches
Industry alerts and vendor guidance should also be used in determining the criticality of patches and thus the required speed of application.
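A policy like this can be sketched in a few lines of code. This is an illustrative example only – the severity labels, patch records, and the rule “critical security patches go to the emergency queue” stand in for whatever your own patching policy defines:

```python
# Hypothetical patching policy: critical security patches get an emergency
# queue; everything else waits for the regular monthly patch cycle.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def schedule(patches):
    """Split patches into (emergency queue, regular cycle), each ordered by severity."""
    ordered = sorted(patches, key=lambda p: SEVERITY_RANK[p["severity"]])
    emergency = [p for p in ordered
                 if p["severity"] == "critical" and p["security"]]
    regular = [p for p in ordered if p not in emergency]
    return emergency, regular

patches = [
    {"id": "P-7", "severity": "low", "security": False},
    {"id": "P-8", "severity": "critical", "security": True},
    {"id": "P-9", "severity": "high", "security": True},
]

emergency, regular = schedule(patches)
print([p["id"] for p in emergency])  # ['P-8']
print([p["id"] for p in regular])    # ['P-9', 'P-7']
```

The point isn’t the code itself – it’s that once the policy is written down this explicitly, a patch-management tool can apply it consistently, without anyone deciding priorities ticket by ticket.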
With SysAid, you can easily create automated policies for various groups of assets and for various types of patches. For example, a policy to automatically patch all desktops with critical security patches, as opposed to servers where you may wish to follow an emergency change
process for critical security patches.
4. What is the best, and hopefully easiest, way to install them?
Very few organizations can afford to rely on manual processes and procedures, fulfilled by hordes of people, these days. Budgets and headcounts are tight, and thus an automated approach to patch management is in the best interest of the business – not only from a cost perspective but also from a governance point of view, as busy people don’t always get around to doing everything they need to do when they need to do it. Thankfully, automation never sleeps.
5. How well is your patch management process working?
It’s all well and good ticking off the first four steps above, but your patching process needs a feedback loop. So ensure that there is also the ability to check, or audit, how well things are working, i.e. that everything that should be patched has been patched. For example, you might want to automatically open an incident for the assets that have had one or multiple failed patches of certain types.
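As a rough sketch of that feedback loop, the following example scans patch job results and raises an incident for any asset with failures. The records and the `open_incident` helper are hypothetical stand-ins for whatever your ITSM tool’s API provides:

```python
# Invented patch job results; a real audit would pull these from the
# patch-management tool's reporting API.
patch_results = [
    {"asset": "ws-001", "patch": "P-101", "status": "installed"},
    {"asset": "ws-002", "patch": "P-101", "status": "failed"},
    {"asset": "ws-002", "patch": "P-103", "status": "failed"},
]

incidents = []

def open_incident(asset, failed):
    # Stand-in for an ITSM tool's "create incident" call.
    incidents.append(f"{asset}: {len(failed)} failed patch(es): {', '.join(failed)}")

def audit(results):
    """Group failures by asset and open one incident per affected asset."""
    failures = {}
    for r in results:
        if r["status"] == "failed":
            failures.setdefault(r["asset"], []).append(r["patch"])
    for asset, failed in failures.items():
        open_incident(asset, failed)

audit(patch_results)
print(incidents)  # ['ws-002: 2 failed patch(es): P-101, P-103']
```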
There’s also most likely to be a corporate compliance need to be able to track who did what when, the proverbial audit trail. Then, depending on your approach to ITSM, you might also want to take a continual service improvement (CSI)
approach to patch management as you would any other ITSM process.
So patch management doesn’t have to be complicated. Take a logical and organized approach, and let automation do as much of the heavy lifting as possible.
Watch this video below to see how you can use SysAid Patch Management to simplify your task of ensuring that all workstations and servers stay up-to-date with the latest product patches.
Posted by Rafi Rainshtein
on July 11, 2017 in Cloud
IT service management has traditionally been a process-driven discipline, supported by the growing use of automation. In 2017, the increased popularity of cloud services offers up even more opportunities for greater use of automation – let’s call it “ITSM as code.”
In the ITIL Practitioner Guide from AXELOS, there’s rightly a large focus on the soft skills required to manage modern IT services. That’s because we often focus too much on IT service management (ITSM) processes and technology, and underestimate how much people are the real ITSM glue.
While ITIL provides best practice to ITSM
professionals on how to improve service via policies, processes, metrics, and controls, cloud service providers (CSPs) such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) now offer capabilities that in effect enable this ITIL guidance to be programmatically applied to cloud services. Let’s call it “Service Management as Code.”
In the AWS Well-Architected Framework white paper, one of the five recommended pillars of the framework is “operational excellence.” This pillar aligns nicely with the ITIL Practitioner guidance, and all the design principles and best practices contained within it are actionable in code.
And while the above is an AWS paper, its practices are applicable to other CSPs, and in this blog I want to outline five ways in which CSPs enable Service Management as Code.
1. API-First Practice
With CSPs, the metrics and controls for all services are available via a programmatic application programming interface (API). Thus, instead of a human using the familiar graphical user interface (GUI) console via a web browser, clicking and typing to control services, the human can now use a command line interface (CLI) to program the web services. This can be contained in a script or program and used repeatedly.
So, if you select any technology today, always ensure that it has a good set of API capabilities such as being able to integrate with other cloud services and to be consumed itself.
Example: the creation of a virtual machine in AWS EC2 can be done via multi-page screens in the browser, or via a one-line script on the command line.
2. Perform Operations with Code
When code is used to translate policies and controls into API commands, this code can be version controlled, access controlled, and it's 100% clear what the interpretation of a policy should be. Code can also be replayed for investigation or test purposes.
As such, configuration management
and responses to operational events are excellent candidates for codifying procedures.
Example: one of the most eye-opening examples of this is Netflix’s fault-injection tool, Chaos Monkey. This is code that programmatically creates a series of failures in production to ensure that resilience mechanisms work. The same programmatic approach can be used to monitor and correct configurations (AWS Config), and AWS Lambda now offers “function-as-a-service,” where code runs in response to operational (and other) events.
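Here’s a minimal, hypothetical sketch of the pattern: operational events are mapped to codified responses, so the “interpretation of a policy” lives in version-controlled code rather than in someone’s head. The event types and handlers are invented for illustration:

```python
# Invented event handlers; in a real setup these might be Lambda functions
# triggered by monitoring events.
def restart_service(event):
    return f"restarted {event['service']}"

def scale_out(event):
    return f"added capacity for {event['service']}"

# The "policy": which operational event triggers which codified response.
HANDLERS = {
    "service_down": restart_service,
    "high_latency": scale_out,
}

def handle(event):
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return f"no codified response for {event['type']}; escalate to a human"
    return handler(event)

print(handle({"type": "service_down", "service": "billing-api"}))
# restarted billing-api
```

Because the mapping is code, it can be reviewed, version-controlled, and replayed in a test environment – exactly the properties described above.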
3. Business Focus to Improve the Signal-to-Noise Ratio
Align the programmable operations to business objectives, for example by improving the signal-to-noise ratio of your metrics.
There are many monitoring services in AWS covering API calls, logging, security access, and more. These should be programmatically and incrementally turned on only if they align to clear business goals. The rule of thumb is: if you don’t know how to action an alert, there shouldn’t be an alert. How do you know that it’s important? It must be aligned to a business outcome.
Example: an important metric for online services is the response/wait time for clients, as impatient customers will give up on a website if it’s too slow. Using monitoring and response services like Amazon CloudWatch, it’s possible to monitor across the entire application and identify bottlenecks and slowdowns that impact website response times.
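The “no business outcome, no alert” rule can itself be codified. In this invented sketch, only metrics that are mapped to a business outcome (and an action) ever become alerts; everything else is treated as noise:

```python
# Invented mapping of metrics to business outcomes and actions. If a metric
# isn't in this map, firing an alert for it would just be noise.
BUSINESS_ALERTS = {
    "checkout_response_time": "customers abandon slow checkouts -> page on-call",
    "payment_error_rate": "lost revenue -> page on-call",
}

def actionable_alerts(raw_alerts):
    """Keep only alerts tied to a business outcome; drop the rest as noise."""
    return [(name, BUSINESS_ALERTS[name]) for name in raw_alerts
            if name in BUSINESS_ALERTS]

raw = ["cpu_steal_time", "checkout_response_time", "disk_inode_count"]
print(actionable_alerts(raw))
# [('checkout_response_time', 'customers abandon slow checkouts -> page on-call')]
```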
4. Replace Human Checks with an “Automated Trusted Advisor”
Use programmable Trusted Advisor
services to continually employ the most cost-effective resources.
Think of those old-school, manual, consultant-led health checks that used to assess your IT environment against “best practices.” This is now automated in AWS with the Trusted Advisor service – which will check your services against best practices for cost optimization, performance, security, and fault tolerance.
Example: human error is a common cause of system outages. Trusted Advisor checks Identity and Access Management configurations to ensure principles such as least privilege are in place, reducing the risk and consequences of human error.
5. Automated Configuration and Release
Use programmable and automated configuration tools like CloudFormation
for release and AWS Config
for tracking and remediating change to eliminate “configuration drift.”
Where once upon a time systems administrators ran scripts to configure servers and application stacks consistently, AWS has taken this a step further with configuration-as-a-service in AWS CloudFormation. It can control many AWS cloud resources, meaning that you can now version control your AWS cloud services just as you do your software.
In ITSM terms, services like CloudFormation allow you to programmatically define a business service as a collection of integrated cloud services that can be repeatedly and reliably reproduced in testing or investigation scenarios. And it can be driven by all the familiar enterprise configuration and release tools, such as PowerShell DSC, Chef Server, Puppet, Ansible Tower, Red Hat OpenShift, Docker Datacenter, Spinnaker, etc.
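As a schematic illustration of the idea, here’s a stack defined as a plain data structure that can be committed to version control and diffed like any other source file. The resource shapes below loosely mimic CloudFormation but are deliberately simplified – this is not a complete or valid template:

```python
import json

# A business service defined as data: the whole stack is an ordinary
# structure that can live in version control. Resource shapes are
# simplified illustrations, not a real CloudFormation template.
stack = {
    "Description": "Example web service (illustrative)",
    "Resources": {
        "WebServer": {"Type": "AWS::EC2::Instance",
                      "Properties": {"InstanceType": "t2.micro"}},
        "Database": {"Type": "AWS::RDS::DBInstance",
                     "Properties": {"Engine": "mysql"}},
    },
}

# Serializing deterministically makes diffs between releases meaningful.
template = json.dumps(stack, indent=2, sort_keys=True)
print(template)
```

Two commits of this file can be diffed to see exactly what changed between releases – which is the “eliminate configuration drift” property in practice.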
So, the choice is yours. You could manage cloud services like on-premise services – via humans interpreting documented procedures and clicking and typing into many graphical user interfaces. But you shouldn’t, unless you are happy with insufficient speed and the risk of adverse service impact.
Cloud services are programmatic and, as such, you can – and should – use code, scripts, and cloud services to codify your ITSM practices.
Posted by Stuart Rance
on July 6, 2017 in ITSM
If you work in an IT organization, you may be tired of being told that you should focus on creating value for customers. You already know that you should be delivering end-to-end services focused on customer needs, not just managing technology, but what does this actually mean in practice? Isn’t managing the servers, the network, the storage, and the applications enough? Somebody has to, and isn’t that how IT creates value?
I’ve worked with a number of IT organizations that asked for help understanding how they could move from a technology focus to a business focus. There isn’t a simple solution that works for everyone, but here are some ideas that have helped some of my customers.
A Customer Example
I once worked with a financial organization that offered more than 800 different IT services. Seriously. Each service was based on an application that the business used, and the service level agreement (SLA) for each application specified lots of measurable targets. There was regular monthly reporting, showing how well the IT department had delivered to all of these SLAs. As you can imagine, the sheer number of SLAs, and the amount of data they generated, meant that the reports were overwhelming. Everyone could see that they needed to make the SLAs, and the reporting, more business focused, but they were grasping at straws trying to figure out how.
The best thing about this particular IT department was that they had some really good business relationship managers (BRMs) who clearly understood what their customers did and what was important to them. So this made the first step easy. We created a list of business units, and then picked one to focus on, for our initial efforts. The mortgage department was the lucky winner. We identified all the major business processes for the mortgage department – for example, there were processes to manage “Third Party Mortgages,” “Online Mortgages,” and “Branch Mortgages” – and then we mapped these processes to the applications, to see which applications were critical for which process.
It quickly became obvious that each business process needed a different mix of applications. Some applications contributed to all of the processes, but some were only used for a very specific purpose. For example, the application that calculated how much commission to pay to a third party was only used by Third Party Mortgages, but the application that determined monthly payments was used by all the different mortgage types.
At this point we knew we had a solution to the problem of becoming less focused on technology processes and more focused on business ones. We needed to create services based on the customer’s business processes, rather than the IT applications. So the next thing to do was to define a small number of services based on these business processes. These new services had names like “Branch mortgage support service” and “Online mortgage support service”. You really couldn’t be more business focused than that. The customer fully understood what they needed from each of these services, and writing SLAs for them was fairly easy – because we just needed to write down what the customer wanted.
We agreed to keep the new SLAs short, just focusing on the really important things that the customer cared about.
I knew we had got this right when one of the BRMs said to me:
“My customer has always said that the most important target for them was the time to onboard a new mortgage agent, and I’ve never been able to include this in an SLA before, because none of the applications delivers that.”
We didn’t throw away the old application-focused SLAs, as these identified some really important technical goals for the applications. Instead, we turned these into operational level agreements (OLAs), which are internal agreements within the IT department. We aligned these OLAs with the SLAs to help make sure we could deliver what the customers wanted. We kept measuring the metrics from these OLAs, but these were only used internally within IT, whereas the new customer reporting was completely focused on the new SLAs.
The Importance of Business Relationship Managers
If you’ve followed my story thus far, then you’ll understand how important the existence of really effective BRMs was to the work we did together. If you’re a BRM yourself, or you work on a service desk, or you’re an IT director, then you probably know well who your customers are and have a good grasp of their concerns. Most likely, you talk to them daily or thereabouts, and you’re acutely aware of the impact your actions have on their success.
Many people in IT are not so lucky. People who work in server support teams, or configuring monitoring tools, or managing network performance, are often given technical requirements and work on these without ever having sight of a real customer. One organization I worked with had “service owners” who were so focused on the technical aspects of delivering services that many of them were completely unaware of who the customers were. More importantly, they really didn’t know what the customers’ concerns were – they simply delivered technical solutions to technical challenges. This really isn’t going to work. If you want to deliver customer-focused services that really do create value, then it’s important that everyone in the organization knows who their customers are, and works to meet the needs of those customers.
What Can You Do in Practice?
So if you currently run technology focused services, with staff who rarely consider the needs of end customers, what changes can you make that would transform you into a customer-focused organization? How can you make sure that everyone knows who their customers are, and what impact they have on the success of those customers?
It might be much easier than you think!
The most effective way I know is by running experiential training. Get your staff into a room and talk about what services they support, who their customers are, and how the things they do create value for those customers. You may need to include some senior managers or BRMs to make sure somebody really does understand the end-to-end value chain. Give them some small group exercises to help them learn from each other. If you can, get them to participate in an ITSM simulation where they can take part in the whole end-to-end service experience for themselves (many training organizations can run these for you), and most importantly, get some customers into the room to talk to them about what they need from IT. One of the greatest pleasures I had as a trainer last year was when one student said to me “You know I was skeptical at first, but I really DO have customers, don’t I?”
Moving from a technology to a business focus is certainly challenging. As I said earlier, there’s no one-size fits-all solution. You need to think about what’s going to work for your specific circumstances, your organization, and your customers. But I strongly recommend that you use the new ITIL practitioner guiding principles
to help you; you’ll find the support they offer invaluable.
I’ll leave you with this question. Are you delivering business-focused services to your customers, or are you just providing technology?
(This blog was originally published as a podcast by Stuart Rance, as part of SysAid’s "Back to ITSM Basics" program.)
Posted by Sarah Lahav
on June 27, 2017 in ITSM
There are some rumors that IT service management (ITSM) is dead.
No way – I’m here to tell you that ITSM is far from dead.
In fact, I think this is a great time for ITSM, and a great time to be an ITSM professional.
With technology such an integral part of every bit of a business, the line between business and IT
is blurred, perhaps even non-existent. ITSM is no longer a “nice to have,” or something that only the service desk does. ITSM is the means by which IT delivers business capability. ITSM is the enabler for realizing real business value from the use of technology.
But some ITSM implementations have fallen short.
Where ITSM Has Fallen Short
ITSM was always intended to answer the following three questions:
- How do components work together to enable services?
- How do services provide business capabilities?
- How can a business best leverage its IT capabilities?
Unfortunately, in many cases, ITSM is implemented only to address the “squeaky wheels,” meaning such ITSM implementations are only operations-focused. Of course, these ITSM rollouts deliver incident management, change management, a service desk, and maybe request fulfillment – all important processes to have. But then they stop there. Services are not defined, design and strategy activities are not formalized, and continual improvement is an afterthought. The rationale for stopping is that the “squeaky wheels” in IT operations have been addressed.
Some ITSM implementations fall short because they are more about processes and less about services. In the rush to implement ITSM, many organizations focus on designing and implementing processes. The notion of a “service” – the value and outcomes delivered to the business – often becomes secondary, or in many cases is ignored entirely.
A third area where ITSM implementations often fall short is business-IT alignment. The concept of aligning what IT does to what the business needs clearly makes a lot of sense – but it has a fatal flaw. Business-IT alignment depends upon the business inviting IT to participate in the development of business strategy. “The business” often goes its own way and only includes IT when there’s a technology need, without considering current IT capabilities and resources; or even worse, only once a technology-based solution has been identified – without having had IT involved in identifying that solution.
The fact of the matter is that business-IT alignment
is not, and never was, the way to look at ITSM. The term ‘alignment’ implies that IT is not part of the business it serves. Business and IT convergence – the recognition that business and IT must work seamlessly – is the lens through which ITSM should be viewed.
The New Reality of ITSM
ITSM is going through an evolution. When ITSM first became popular in the late 1980s, it provided an orderly, linear approach to managing IT. But modern ITSM is reacting to a new reality. What does the new reality of ITSM look like?
| ITSM was… | ITSM now is… |
| --- | --- |
| Business and IT alignment | Business and IT convergence |
| Managing all things IT-related under one roof | Ecosystems of components, partners, and suppliers found both internally and externally |
|  | Business value and outcome focused |
| IT project oriented | Business initiatives enabled by IT |
| One tool in the toolbox | A robust set of capabilities and tools |
Though ITSM continues to be a “people-process-technology
” approach for managing services, it is now a mix of frameworks and methodologies. While ITIL®
continues to be the de facto standard for ITSM, other frameworks and methodologies are providing ITSM professionals with tools and capabilities to do the job that IT organizations are being asked to do.
Lean and Kanban, for example, help us visualize work and understand where our processes may be wasteful. DevOps
is delivering some of the “how” for deploying releases and reacting to changing business needs with greater nimbleness and effectiveness.
Technology, however, has made significant leaps in the past few years to help with process execution. Technologies such as the Internet of Things (IoT), robotic process automation, machine learning, and others provide the capability to realize the promises of process design like never before.
For example, using the SysAid IT Benchmark tool, we analyzed over 86 million service request records from more than 10,000 IT departments in 140 countries over a six-year period. For argument’s sake, let’s assume that processing each of these requests manually required an hour. We found that 17% – or nearly 15 million – of those requests could have been prevented by leveraging machine learning. What innovations could those IT departments have delivered with those 15 million hours?
You can read more about that in Oded Moshe
’s article: The potential for machine learning in ITSM.
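For what it’s worth, the arithmetic behind that estimate is easy to check, under the stated assumption of one hour per request:

```python
# Sanity-checking the figures above, assuming one hour per request.
requests_analyzed = 86_000_000
preventable = requests_analyzed * 17 // 100  # 17% of all requests

print(f"{preventable:,} requests, i.e. roughly {preventable:,} hours")
# 14,620,000 requests, i.e. roughly 14,620,000 hours
```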
Why Is ITSM More Important than Ever?
Here are three reasons why I think ITSM is now more important than ever.
1. Business Value Contribution by IT
As technology has become more commercialized and consumerized, good ITSM can help tell the compelling story of why a business should get its technology from its IT organization. The IT value chain, enabled and supported by good ITSM, becomes a seamless fit into the business value chain.
2. Digital Transformation
It is not a question of “if” digital transformation will happen – it is a question of “when.” Many businesses are contemplating what digital transformation means to them. A core requirement for a business to begin digital transformation is to first have absolute clarity on its services and processes. Good ITSM provides that clarity.
3. Business Agility and Responsiveness
Good ITSM supports business agility and responsiveness by promoting standardization in the form of models. And if something can be modeled – like standard requests, or password resets – it can be automated, which provides the ultimate level of responsiveness to a business.
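That “model it, then automate it” idea can be sketched simply: a standard request type maps to a predefined fulfillment model, and anything without a model falls back to a human. The request types and steps below are invented for illustration:

```python
# Hypothetical request models: each standard request type has a defined,
# repeatable sequence of fulfillment steps.
REQUEST_MODELS = {
    "password_reset": ["verify identity", "reset password", "notify user"],
    "new_starter_laptop": ["check stock", "image laptop", "ship to office"],
}

def fulfill(request_type):
    """Run the automated model for a request, or fall back to a human."""
    steps = REQUEST_MODELS.get(request_type)
    if steps is None:
        return ["route to service desk for manual handling"]
    return [f"automated: {step}" for step in steps]

print(fulfill("password_reset"))
# ['automated: verify identity', 'automated: reset password', 'automated: notify user']
```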
Lead the Evolution!
So, I think now is a great time for ITSM and it’s a great time to be an ITSM professional. Technology has caught up with, and can now enable, many ITSM concepts.
Good ITSM enables the agility and responsiveness demanded by today’s business.
Good ITSM cements the convergence of business and IT by enabling business capability through the effective use of technology.
Good ITSM professionals are uniquely positioned to lead the evolution!
Posted by Rafi Rainshtein
on June 20, 2017 in Cloud
The world has been living with on-premise software for far too long now. The benefits that businesses and IT teams alike are gaining from well-managed, cloud-based services are rapidly changing the face of IT. We’re also becoming far more confident about how the costs of migrating IT service management (ITSM) software to the cloud
can be quickly recouped. This cost benefit is elevated even more so when IT departments are able to calculate the true cost of on-premise software. From the cost of physical servers, to the time spent running weekend upgrades... the list of hidden costs to the business is endless.
To help you get thinking about how these costs appear in your own IT environment, here is our long list of hidden on-premise costs:
Let’s start with a few easy ones…
1. Physical Servers
So your software has to live on a physical server, which you need to buy, keep somewhere, run electricity to, and keep cool. You might even need to give it a polish from time to time.
2. Backups
Your data is only as good as your last backup. The costs attached to managing your own backups, storing data off site, and checking yesterday’s backups for errors are an additional expense often not accounted for.
3. Anti-Virus Software
Keeping your anti-virus up-to-date is obviously important on all your software and servers. However, the more software you have, the more instances of anti-virus you need.
4. Upgrades
Oh, the dreaded upgrades! Managing your own upgrades is one of the most time-consuming activities for IT staff. In addition to the planning, scheduling, and late nights, you also have to account for all the times an upgrade fails or breaks and doubles the workload.
5. Patching
Running patches is much like performing an upgrade. However, instead of doing one a month, you have to do about 25! And because they are mostly ad hoc, managers rarely account for the time you spend doing them.
6. Downtime
Planned or not, downtime caused by on-premise software is bad. The business always notices outages and clings on to the memories they create for way longer than you’d like. It also really drags down your SLAs and end-of-month metrics reports.
Okay, now here’s a few you perhaps hadn’t thought of…
7. Testing
How often have you calculated the amount of time you actually spend testing a new patch, release, or upgrade? Not very often, I’m sure! In reality, testing changes to internally hosted software is just something the sysadmin stays up all night doing, but never complains about!
8. When it Breaks
Many IT teams think software ‘breaking’ is just a part of normal life: you log the incident, fix the bug, and bring it back to life. However, the truth is that once you’ve got rid of on-premise software, all that time you have to spend fixing it magically disappears too.
9. Integrations
On-premise software has SO MANY more integration parameters to consider. Tiny configuration changes on servers can totally screw with even the most basic Active Directory integration. Standardized cloud-based software is designed to integrate far more easily and tends to speed up the process significantly.
10. Risk Management
Cloud used to be looked at as a higher risk, but this is really old-fashioned thinking. Far more security breaches and data losses take place on internally hosted and on-premise servers these days. This is mostly due to the fact that IT teams are struggling more and more to keep up with the growing number of potential threats and risks.
11. Scalability
Rarely considered until it’s too late, the ability to scale services can either cost or save a business huge amounts of time and money in the long run. Most on-premise installations are designed and built with the ‘now’ in mind, making scaling up later on very tricky indeed.
12. Storage
Every IT department has that server. The one that sends you an event notification every bloody Monday morning telling you it’s running out of hard disk space! Moving away from on-premise software not only solves this problem, but reduces your overall storage spend.
So here are a few boring ones, but the accountants will like them!
13. License Compliance
YAWN… did someone say software licensing? We know software licensing is probably the least sexy part of our jobs, but it’s also a very expensive thing to mess up! When dealing with on-premise, ensuring you’re compliant across software, databases, and virtual machines is a difficult and risky job, which often goes wrong.
14. Over Licensing
Another licensing cost worth mentioning (because loads of people do it) is owning more licenses than you use. Often described as shelfware, over-licensing of on-premise software now accounts for around 20% (according to Gartner) of the average business’s software licensing spend!
15. Unpredictable Budgets
Most of the points above and below are pretty hard to forecast. The cost of running on-premise software is almost always underestimated and frequently makes for bad news at the end of the financial year.
16. Asset Management and Auditing
Every year the pressure on IT to keep track of its assets grows. From the number of desktops and servers to the software and licenses you own, as businesses get bigger, so does the need to be totally compliant. On-premise software just adds more and more unwanted weight and cost to the asset management and auditing process.
17. Overtime and Emergency Costs
Ever waited in the office till 3am for an urgent server part to be delivered? I have, and it sucks! Not only does this ruin your evening, but the costs attached to compensating you for your time, calling out specialist engineers, or raising high-priority incidents with suppliers always come at a high price.
Don’t worry, this list gets a bit better near the end!
18. Weekends and Evenings
Ditching on-premise software means you no longer have to spend weekends and evenings in the office or sitting in front of the computer running updates, testing, patching, and so on. Just get your software up in the cloud and spend some more time with your friends and family!
19. Continual Service Improvement
Remember that ITIL® book you have on the shelf? There is actually some really good stuff in there about improving your IT services. Many IT teams struggle to find the time to focus on learning how to approach improvement, because so much of it is wasted managing and fixing on-premise software.
20. Relationship Building
A huge focus for many IT teams nowadays is building better relationships with the business – whether that be with peers in HR and finance, or all the way through to end customers. But frankly, IT teams who are weighed down with too much on-premise and legacy software simply don’t have the time to do this.
21. It Really Does Take Up 30–40% of Your Costs
We’ve been working the numbers, talking to customers, and testing it out, and we’ve found that in most organizations keeping software on-premise adds around 30–40% to your overall IT costs vs. moving those services to the cloud!
At SysAid, we’ve invested a great deal into uncovering the difficulties brought about by keeping your ITSM software on-premise. As a result, we’ve developed a set of great cloud-based services and migration tools, because we want to make moving from on-premise to the cloud as easy and cost-effective as possible.
If you’d like to discuss migration options with our team, just get in touch today
and we’ll talk you through everything from backing up your current tool set, to going live with your new cloud-based ITSM solution.
Find out more about SysAid in the Cloud.
Posted by Stuart Rance
on June 13, 2017 in Service Desk
When you work on a service desk, calls from angry users can be very hard work, not least because of the way we’re likely to feel about them. Being at the other end of an angry phone call or email is never pleasant, and being confronted by an angry user can be very trying. What’s worse is that while managing the call is our responsibility, resolving the issues may not be. So what can you do to help a customer when they are so angry? How can you get past the anger so you can help resolve their problem? And how can you manage your own feelings?
In my years in the IT industry, I have had to deal with many angry users and customers. Here are some of the things that I have found effective.
1. Accept that the User Is Entitled to Their Anger
The first thing you need to do is to accept that the caller is entitled to their anger. Even if you KNOW that the user is wrong, they are unlikely to be angry for no reason. So, listen to what they have to say, and try to understand why they feel angry. If you can work out what it is about the situation that has resulted in a furious user, rather than one just asking for help, you will have a much better idea of what you can do to help. For example, the user may be angry because a minor IT failure has resulted in a major business impact. If you treat this as a minor issue, then it’s not surprising that they get angry.
2. Don’t Get Defensive
This is, of course, much more easily said than done. It is very easy to get defensive, and respond angrily. But if you try to justify the situation, or to tell the caller why they are wrong, the caller is likely to get even angrier, and it will take even longer to achieve the calm you need to deal with the problem.
What you need to remember is not to take the anger personally. You are the target of this anger because you are a representative of the company. The caller is angry with the company, not with you.
Something that I have found useful in these circumstances is to remember a time when I have felt angry with a service organization. I think about what reaction I wanted from the person on the other end of the phone. I try hard to empathize with the angry user, it really could have been me on some other occasion.
3. Listen Actively
The first two points I have made are about managing your own emotional responses. But what do you actually do in this situation?
In my experience, the most important thing to do is to listen and to demonstrate that you are genuinely trying to understand. Listen patiently and give the user enough time to say what they need to. Don’t try to cut them short, but allow them to express their feelings. Don’t try to justify the situation or to tell the user why they are wrong; simply listen to what they have to say, and listen to the emotions and feelings as well as the words. Show that you understand how they feel, as well as the technical details of what has happened. One way to do this is to repeat back what you’ve heard, and ask for confirmation that you have understood correctly – not just about the technical situation, but also about the anger and the reasons for it. Remember that it will help if you use the caller’s name when you talk to them, since people tend to respond much better when addressed by name.
Another way to demonstrate your understanding is to offer sympathy, and an apology, even if you don’t think you’re really in the wrong. BUT be careful. Make sure you’re sincere in your apology, since insincerity is likely to be recognized and resented! I know this sounds contradictory. Surely, it’s impossible to apologize sincerely if you don’t think you are in the wrong? However, if you accept that the user really is entitled to their anger,
then you should find that you can offer a sincere apology.
Let’s look at an example. Suppose a user is complaining because they have had to wait 2 hours for you to call them, but the SLA says you don’t have to call back for 4 hours. Of course, they are technically in the wrong. Nevertheless, by listening actively you will have gained a real understanding of the impact this has had on them, and this should enable you to empathize and to offer a genuine apology. It won’t hurt you to say “I am very sorry that we didn’t respond any sooner. I understand the impact this has had on you. Let me try to help put things right here.”
One piece of advice that I have found helpful is to smile while you are talking to the user. You may not notice the difference, but if you smile then it changes your voice, and the user will respond better to what you have to say.
4. Own the Issue
When you take a call from an angry customer, take personal ownership of the issue, even if you would normally pass calls on to another team. Try to ensure that the customer’s issue is handled properly from this point on. If necessary, escalate the situation to your management to get permission to deviate from normal procedures so that you can facilitate this. If you can provide excellent service from this point on, you have a chance not just to salvage the organization’s relationship with the customer but to actually improve it.
Tell the user that you are going to own the issue, and then make sure that you really do own it. It often helps if you ask the user what they would like to happen next. If possible manage the issue the way the customer wants. When that’s not possible, it’s best to be honest and straightforward. Tell the customer what the limitations are now, before they go away with an expectation that you can’t meet. I once had a problem with an organization that delivered a faulty product. I phoned to complain and the person who answered promised to get back to me within 24 hours with a resolution. When they took ownership of the problem, I felt much better. But although I waited patiently for them to call me back, the call didn’t happen! When I phoned them back two days later I was very angry.
As you progress the issue, you must make sure the customer is regularly updated. Talk to them about how often you will update them, and how you will do this, and then make sure you do what you have agreed.
5. Follow Up
After the issue has been resolved you should contact the user again. This is your opportunity to start building a new, healthier, relationship with that user. Make sure they are happy with the resolution, and show them that you care about the service you deliver, and about how they feel.
However good our services are, there are always going to be some users who aren’t satisfied. And sometimes they will get angry with us. One of the things that distinguishes a great service organization is how they manage that situation. You can make it worse by telling the user that they’re wrong, and they’re not entitled to feel angry, or you can acknowledge that they are entitled to their anger and do what it takes to resolve the situation.
As always, please let me know when you try the ideas in my blog and share how well they worked for you.
Posted by Oren Zipori
on June 6, 2017 in Asset Management
In the first part
of this blog series I provided my first four tips for creating a configuration management plan. These related to getting a common understanding of the basics, setting the right scope, agreeing on a naming convention, and knowing more about your IT estate. This blog offers four more tips, taking it to eight in all.
Please read on to learn more about how best to get started with configuration management
through effective planning.
Tip 5: Watch the Edges
One of the IT service management (ITSM) processes most closely aligned to configuration management is change management. Therefore, your configuration management plan needs to cover how change will interface with configuration, and at what point in the change lifecycle the configuration management process will need to be called upon.
Referencing your change management process in your configuration plan means that there’s appropriate support in place to ensure that when a configuration item (CI) is updated, the configuration management database (CMDB)
or configuration management system (CMS) is also updated, such that what you have in your CMDB or CMS matches exactly what you have in your production environment.
Nothing will make your configuration management capability fail quicker than your CMDB or CMS having incorrect or out-of-date information. Thus, control is a critical aspect of configuration management.
Also, work closely with change management personnel to ensure your processes are in sync. For example, you could put a process step in place where a change can only be closed off as successful when the CMDB or CMS is updated. Something else to consider is putting change restrictions or freezes in place during key configuration management process points such as baselining or audit exercises so that you have stability during these critical periods.
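As a sketch of that process step – the data structure and rule here are illustrative examples, not a SysAid or ITIL-mandated implementation – the “change closes as successful only when the CMDB is updated” gate might look like:

```python
from dataclasses import dataclass


@dataclass
class Change:
    """Hypothetical minimal change record for illustration."""
    change_id: str
    implemented: bool     # has the change been carried out?
    cmdb_updated: bool    # have the related CMDB/CMS records been updated?


def can_close_as_successful(change: Change) -> bool:
    """A change may only close as successful once the CMDB reflects it."""
    return change.implemented and change.cmdb_updated


print(can_close_as_successful(Change("CHG-101", True, False)))  # False
print(can_close_as_successful(Change("CHG-101", True, True)))   # True
```

Encoding the rule as an explicit gate, rather than relying on people remembering it, is what keeps the CMDB and the production environment in sync.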
Tip 6: Remember Your Lifecycle
Having a section on status accounting in your configuration management plan ensures that the lifecycle stage of each CI is captured accurately. (You can read more about that in this great blog: ITSM Basics: How to Do Configuration Management.)
Some example configuration management statuses include:
- In test
- In pre-production
- In production
- Out for repair
- In disaster recovery (DR) environment
- Disposed of
Status accounting ensures that all CIs that make up the service baseline, or snapshot, have been captured and that all changes have been captured by change management and are correctly reflected in the CMDB or CMS.
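As an illustration, the statuses above could be modeled as a small lifecycle with allowed transitions. The transition rules below are hypothetical examples; your own plan should define the real ones:

```python
from enum import Enum


class CIStatus(Enum):
    """Lifecycle statuses taken from the examples in the text."""
    IN_TEST = "In test"
    IN_PRE_PRODUCTION = "In pre-production"
    IN_PRODUCTION = "In production"
    OUT_FOR_REPAIR = "Out for repair"
    IN_DR = "In disaster recovery (DR) environment"
    DISPOSED = "Disposed of"


# Hypothetical transition map: which status changes are legitimate.
ALLOWED = {
    CIStatus.IN_TEST: {CIStatus.IN_PRE_PRODUCTION, CIStatus.DISPOSED},
    CIStatus.IN_PRE_PRODUCTION: {CIStatus.IN_PRODUCTION, CIStatus.IN_TEST},
    CIStatus.IN_PRODUCTION: {CIStatus.OUT_FOR_REPAIR, CIStatus.IN_DR,
                             CIStatus.DISPOSED},
    CIStatus.OUT_FOR_REPAIR: {CIStatus.IN_PRODUCTION, CIStatus.DISPOSED},
    CIStatus.IN_DR: {CIStatus.IN_PRODUCTION},
    CIStatus.DISPOSED: set(),  # end of the road for a CI
}


def can_transition(current: CIStatus, new: CIStatus) -> bool:
    """Return True if the status change is allowed by the lifecycle model."""
    return new in ALLOWED[current]
```

Rejecting out-of-model transitions (say, a disposed CI reappearing in production) is exactly the kind of inconsistency status accounting exists to catch.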
Tip 7: Verify and Audit
For your configuration management plan to be truly effective, you need to have a section on how you will verify the accuracy of your data as well as how to respond in an audit situation. Verification includes routine checks that are part of other processes – for example, verifying the serial number of a desktop PC when an end user logs an incident, or checking that the version of software updated in a planned change has been added to the CMDB or CMS. Also, make sure that you detail who will be doing the checks, and how often, in your plan.
When defining an audit schedule in the plan, look to the rest of the business for guidance. Do you have any regulatory requirements such as SOX or Basel III, or any standards such as ISO/IEC 20000, that need to be adhered to? If so, these will probably come with a defined audit cycle.
Also add a schedule for internal audits, because the best preparation for an external audit is running an internal one first, so that you can correct any potential issues – or at least come up with an improvement plan for any major findings – beforehand.
Tip 8: Don’t Forget Your Reference Section
The configuration management plan should also include a reference section that details where information has been sourced from.
When you’re creating a CMDB or CMS you’ll be talking to third parties such as support teams, service architects, and project managers – tag these teams or roles as references relative to the information they provide. Plus, you’ll need to capture the non-human sources of information, such as a service catalog, support documentation, vendor contracts, or service level agreements (SLAs), so that information can be verified as necessary before it’s placed into your CMDB or CMS.
So, those are my eight tips for what to include in a configuration management plan. What else do you have in your configuration plan? What would you add to my tips?
Posted by Oren Zipori
on May 30, 2017 in Asset Management
We’ve all probably heard, and even used, the phrase “fail to plan, plan to fail” but never has this been more pertinent than when planning for a corporate IT configuration management capability – from strategizing, through selling the investment in it, to using it to make a difference to IT and business operations.
Ideally, configuration management
is one of those ITIL and IT service management (ITSM)
processes that should sell itself, but the reality is that it’s much harder to get buy-in for configuration management than for, say, change management. And once an initiative has started, configuration management can quickly spiral out of control and fail to deliver the promised business outcomes, so it needs very careful planning to be effective.
Don’t worry though, this blog contains the first four, of a total eight, top tips for creating an effective configuration management plan, one that you’ll have no trouble “selling.”
Tip 1: Start With the Basics
Start by explaining that ITIL’s service asset and configuration management (SACM) is the process (or capability) responsible for ensuring that the assets required to deliver IT services are properly controlled, and that accurate and reliable information about those assets is available when and where it is needed. This information includes details of how the assets have been configured and the relationships between assets.
It's also a case of setting the scope and boundaries for configuration management – what it is and isn’t, and what it will and won’t do. Thus, the plan’s introduction (or a referenced appendix) should include an overview of how configuration management will work in your organization, along with objectives and key outcomes.
This will likely include:
- Roles and responsibilities
- The interfaces into other ITSM and business processes
- The technology employed, and how it works with other IT management and business systems
- The reasons why configuration management is needed
- How configuration management will change things for the better
- Some anticipated outcome statements such as fewer change delays or issues, quicker incident resolution, or more efficient capacity planning
Tip 2: Set Out Your Scope
One of the primary reasons configuration management initiatives fail is because people try to do too much too soon. So setting the right scope is a key part of your plan. When planning for configuration management, there are two key approaches:
- Broad and shallow – knowing a little about everything
- Thin and deep – knowing a lot about a limited number of things, often the most business-critical or problematic IT services.
The latter approach is usually the one taken, as it provides maximum value early on.
Then there are three basic layers to consider:
- Inventory
- Assets
- Configuration items (CIs)
The inventory (management) layer takes care of consumables: keyboards, mice, CD drives (if still used), power cables, USB sticks, security peripherals, etc. It’s all about equipment that needs to be tracked for monetary value and to make sure that it’s in stock…but that’s about it.
The next layer is assets or asset management. Example assets include: PCs, laptops, and printers. This is still the stuff that we need to keep track of for monetary reasons and to make sure they’re in stock when needed, but we also need to know locational information and how they are supported.
The final layer is (configuration) items under the control of configuration management, which are often the important items that make up your critical IT and business services. Servers, network devices, and business applications are all CIs that should be under the control of configuration management to ensure that they are managed, supported, and subject to the appropriate change control.
Tip 3: Agree on a Suitable Naming Convention
The plan should explain any proposed, or agreed, naming conventions or nomenclature. Confused? Every CI or item that’s under the control of configuration management should have a unique name or identifier. When I’m tasked with implementing a configuration management capability from scratch, I try to use a naming model that will make it easy for the CI to be identified. So I tend to use something like the following:
Type of Service – Location – Level of Complexity
So a WinTel server based in New York requiring third-line support would be WinTel1234-NY-L3 and a business application located in Dublin requiring first-line support would be BusApp1234-DUB-L1, and so on. In terms of naming conventions, it doesn’t matter what logic you use, as long as everything has a unique identifier.
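As a minimal sketch, the naming convention above can be expressed as a simple helper; the type codes and locations are just the illustrative examples from the text:

```python
def ci_name(ci_type: str, sequence: int, location: str, support_level: int) -> str:
    """Build a unique CI identifier in the Type–Location–Level format,
    e.g. 'WinTel1234-NY-L3'."""
    return f"{ci_type}{sequence}-{location}-L{support_level}"


print(ci_name("WinTel", 1234, "NY", 3))   # WinTel1234-NY-L3
print(ci_name("BusApp", 1234, "DUB", 1))  # BusApp1234-DUB-L1
```

Generating names from a single function like this, rather than typing them by hand, is an easy way to guarantee the convention is applied consistently across the CMDB.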
Tip 4: Know Your IT Estate
A key part of your plan is the baselining process: taking a snapshot of the critical services and their key dependencies so that we know exactly what makes up the IT or business service. The purpose of a baseline is to take a measurable part of the service so that it can be added to a CMDB (configuration management database) or CMS (configuration management system).
Since a baseline is a snapshot of the state of a service at a given point in time, it can be used as part of change management activities: if the service has changed, there should be a valid, authorized change request against it. The baseline also acts as a stable reference for future planned changes.
However, don’t fall into the trap of trying to capture too much data at first – you can always build things up later. If you try to go into too much detail when starting out, you might run out of time and money before achieving any communicable success. Also bear in mind that the more detail you capture, the more work it will be to maintain it.
Here are some useful CI attributes to capture as a starting point:
- Unique Identifier
- CI type, for example: server, network device, software package
- Version number
- Support details
- Vendor details
- License details
- Purchase date
- Warranty details
- Relationships to other CIs
So that’s my first four configuration management planning tips. Want more tips? Please come back soon for the second part of this blog.
Posted by Rafi Rainshtein
on May 23, 2017 in Cloud
Don’t be fooled – by the cloud tech-talk of instances, databases, code, and APIs – into thinking that cloud is all about technology. Unfortunately, this kind of tech-talk can convince IT service management (ITSM) professionals into thinking that cloud is just another technology evolution with zero impact on ITSM
. However, those that realize cloud is all about business, applications, service, and operations will understand the impact across the ITIL ITSM best practice framework – in particular, the impact on capacity management.
Cloud requires a different capacity management process to that used for traditional, on-premise IT services. The key changes are in the following five approaches, all of which make capacity management more granular, and move from the long-range “vague” to the short-range “specific”:
- Forecast horizon is shorter
- Speed of change is faster
- Blend of resources is finer
- Automated changes in response
- Balance of OpEx and CapEx
An ITSM professional that understands these five cloud capacity management approaches will be a huge asset to any organization, measured in terms of the business bottom line as well as service quality.
1. Forecast Horizon is Shorter
Buying the full IT stack for on-premise IT service delivery is a long, difficult, complex, and expensive process. Want to know how long?
It takes months – nine to twelve months is standard – to design, procure, and deploy any reasonably complex system on-premise. Once procured, it has a lifetime of three, five, or seven years. Maybe longer. This is the long, long length of the on-premise capacity management horizon.
Over that time, capacity is over-provisioned for peak workloads and this over-provisioning burns money. One might as well be throwing dollar bills out of the window. But in the traditional IT operations spirit of “I only get fired for outages,” capacity management thinking prefers to avoid under-provisioning that can hurt customers and therefore the business.
A capacity manager doesn’t have this many-year-horizon with cloud services. The capacity manager now only needs to forecast ahead as far as the time it takes to add more capacity to the cloud services and that is, on average, around 15 minutes
from decision to deploy, including the time to make a coffee and get comfy in front of the console.
Cloud capacity managers additionally make longer-range predictions to save money by purchasing reserved cloud capacity, sometimes saving over 60% in costs. So capacity management still has a role to play in longer-forecast planning, but it’s now about financial efficiency, not the avoidance of disaster.
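A rough, hypothetical cost comparison shows why this trade-off matters. The prices below are made up for illustration and are not real cloud provider rates:

```python
# Hypothetical hourly rates: reserved capacity at 60% off on-demand.
on_demand_hourly = 0.10   # $ per instance-hour, pay-as-you-go
reserved_hourly = 0.04    # $ per instance-hour, 1-year commitment

hours_per_year = 24 * 365  # 8760


def yearly_cost(utilization: float) -> tuple[float, float]:
    """Yearly cost of one instance-slot at a given utilization (0..1).

    On-demand is paid only while running; reserved is paid regardless.
    """
    on_demand = on_demand_hourly * hours_per_year * utilization
    reserved = reserved_hourly * hours_per_year
    return on_demand, reserved


# Reserved capacity only pays off if the instance is busy enough:
for util in (0.3, 0.4, 0.8):
    od, rs = yearly_cost(util)
    print(f"{util:.0%} utilization: on-demand ${od:.0f} vs reserved ${rs:.0f}")
```

With these illustrative rates the break-even point is 40% utilization, which is why steady, predictable workloads go on reserved capacity while spiky ones stay pay-as-you-go.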
2. Speed of Change is Faster
As if predicting capacity changes wasn’t hard enough, responding to them is difficult in non-cloud systems.
Capacity managers cannot quickly respond to unplanned changes in demand if it takes months to procure and deploy capacity on-premise. The brand is then damaged and customers leave if the IT service is down or the business can’t adequately process transactions during highly-visible seasonal fluctuations such as summertime or Christmas (when, unfortunately, many staff are off work).
Cloud components can be scaled quickly and even large amounts can be done in a few hours (10,000 VMs anyone?) with some extra communication with the cloud service provider. Plus, the business can scale down quickly too and turn off all of that excess capacity when the seasonal fluctuation subsides. This can’t be done on-premise, it can only be done in the cloud.
3. Blend of Resources is Finer
On-premise systems might be measured by the number and size of datacenters, comms rooms, and racks. Adding a server might mean adding another rack. That might mean adding another switch, and another rack. Which then might mean extending the closet or room, or even the datacenter.
To avoid hitting these capacity potholes, long-range capacity management forecasting is done to provide more capacity well ahead of the predicted demand. This is a standard enterprise “best practice” approach that’s wasteful and expensive.
In the cloud, it’s possible to keep on adding VMs without worrying about any physical infrastructure or other capacity limits – and so now the granularity of capacity is one virtual machine.
If it’s possible to use higher-order cloud services such as AWS S3 storage, then operations are further removed from storage capacity considerations as these are so scalable a normal enterprise will never hit the limits – and no capacity management is required in the traditional sense. Capacity management now moves to the question “How efficient are we being with our used capacity, can we save money?”
4. Automated Changes in Response
Responding to expected and unexpected demand causes much stress for a capacity manager. For instance, in a typical fixed-size, on-premise IT system there are physical limits to the processing capacity.
The normal behavior when capacity demand exceeds current supply is to push out or de-prioritize non-production workloads – something has to give. But what if getting the new product live is also business critical, and that’s what the non-production workloads are doing? Is the unplanned production capacity demand now delaying an important product release, promised to customers already through advertising and other communications?
In the cloud, this is handled differently. Capacity managers can use automated systems such as AWS EC2 Auto Scaling to add capacity – such as more compute or more load balancers – manually, on a schedule, or dynamically. The only upper limit to capacity supply is how much the business can afford to spend.
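The kind of threshold-based decision that such services automate can be sketched as follows; the thresholds and limits are hypothetical examples, not AWS defaults:

```python
def desired_instances(current: int, cpu_utilization: float,
                      scale_out_at: float = 0.70, scale_in_at: float = 0.30,
                      minimum: int = 2, maximum: int = 20) -> int:
    """Decide how many instances to run from average CPU utilization.

    Scale out one step under load, scale in one step when idle, and
    always stay within the configured floor and ceiling.
    """
    if cpu_utilization > scale_out_at:
        current += 1  # add capacity under load
    elif cpu_utilization < scale_in_at:
        current -= 1  # release idle capacity to save money
    return max(minimum, min(maximum, current))


print(desired_instances(4, 0.85))  # 5: scale out
print(desired_instances(4, 0.10))  # 3: scale in
print(desired_instances(2, 0.10))  # 2: never drops below the minimum
```

The ceiling in a real deployment is ultimately a budget control: it caps how much the business is willing to spend when demand spikes.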
5. Balance of OpEx and CapEx
Pay-as-you-go (PAYG) is one of the five essential cloud characteristics
. This consumption-focused purchasing method means that you can align operational expenditure to demand by consuming only the cloud services you need. The alternative approach, with on-premise, is purchasing hardware and software, and owning (and managing) these assets for a three, five, or seven-year period.
Some organizations have budget arrangements to annually plan spend against capital expenditure. This can also be done with the cloud with mix-and-match reserved capacity (annual) and PAYG (on demand). This allows capacity managers to cater for mostly-steady but occasionally-“bursty” workloads.
The other demonstration of mixing OpEx with CapEx is in the so-called Hybrid Cloud
model – mix the CapEx-laden on-premise systems with OpEx-savvy public cloud, handling the steady-state workloads on-premise and the fluctuations in the public cloud. If you can achieve this technically, architecturally, and operationally, that is.
Capacity management is still important, but different, when it comes to cloud. The old constraints are different and a modern capacity manager is now constrained only by budget (and its efficient use) and a workload’s ability to exploit cloud architecture for auto scaling.
Posted by Oren Zipori
on May 16, 2017 in General IT
One of my biggest fears as an IT manager is coming to work one day and finding out that my company network was hit by a ransomware attack. So you can imagine my reaction when I read the world news on Friday (May 12th) about the widespread ransom attacks that were taking place and affecting very large institutions – places where you’d think security standards had them completely covered. As it turned out, this was happening worldwide, but the big hit was felt mostly in Europe.
Ransom Run by the Rotten Mafia
By now we all know the name of that ransomware attack: “WannaCry” (official name Ransom:Win32/WannaCry). Like all other ransomware, it encrypts files on an affected computer, as well as any network files accessible to that computer.
After the encryption, the hackers behind the ransomware leave a text message, or some kind of note, notifying the user that their files have been encrypted and that the only way to get their data decrypted is to pay up – then the “nice” hackers will send over a key to unlock all their files. It’s literally a ransom situation run by the mafia of the cyberworld!
After reading a bit about the WannaCry ransomware, I understood that the best way to protect ourselves from such an attack is to deploy the Microsoft Security Bulletin MS17-010 fix, which was released in March 2017. Yup, not that long ago… hence many organizations and individuals had not done so before the insane cyber attack over the weekend.
Can You Say Patch Management?
As the IT Manager at SysAid, I’m using all the SysAid tools (but of course!) to manage my IT services and support, and that includes SysAid Patch Management, which helps to keep our Windows-based servers and PCs always up-to-date with the latest security patches/updates.
This means that all of my users’ computers were being patched on a regular basis and that the necessary security fix was already deployed. Thank goodness!
To feel completely secure, as an added precaution, I logged in to my SysAid Cloud console so I could assess the deployment of the MS security fix across my network. With a simple report I was able to see which computers had the security fix installed and where it was missing (there are various reasons why a mass patch deployment can fail). Then, with this data from my report, I was able to directly attend to those computers and make sure that the security fix was properly installed on them.
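A generic sketch of such a compliance check – the inventory data and function here are hypothetical illustrations, not the actual SysAid API – might look like:

```python
# Hypothetical machine inventory; a real patch management tool
# collects this data for you automatically.
inventory = [
    {"host": "pc-001", "patches": {"MS17-010", "MS17-006"}},
    {"host": "pc-002", "patches": {"MS17-006"}},
    {"host": "srv-01", "patches": {"MS17-010"}},
]


def missing_patch(machines, patch_id):
    """Return the hostnames that do not have the given patch installed."""
    return [m["host"] for m in machines if patch_id not in m["patches"]]


print(missing_patch(inventory, "MS17-010"))  # ['pc-002']
```

The output is the worklist: the handful of machines where the mass deployment failed and which need hands-on attention.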
To make a long story short, as with all IT-related security issues, one of the most important things is a fast response!
Finally (a shameless plug), I’d love to share with you this entertaining video that my marketing colleagues put together so you can understand how SysAid Patch Management works: