2016: Is This the Year to “Go Digital”?

When an industrial giant like GE starts to move health data into the cloud using its new Predix cloud offering, you sit up and take notice. It’s clear that “digital” has gone well beyond Facebook and Google+. What does this mean for the enterprise?

The consumerization of IT means many of us have been acting and interacting in more digitally sophisticated ways at home and in our social circles than we do at work. In fact, in some cases, enterprise computing has lagged behind personal computing when it comes to the digital revolution. No doubt the scale of any decision to “go digital” feeds considerable investment apprehension. Many large enterprises believe they can’t possibly act quickly enough to keep up with the steady march of new innovations emerging in the marketplace. This hesitation often translates into lots of talk about technology with little action, especially when decisions about technology involve a radical change to a company’s way of doing business.

Instead of asking “What are digital technologies?” and “What does digital transformation mean?”, enterprises need to be asking “How can we use advances in technology to create sustainable and market-disrupting value?” Making sense of the dizzying rate of technological change is a matter of looking at it through your own familiar, trusted business perspective.

In a new white paper, Avoiding the Siren Song of Technology: Focusing your Digital Strategy on Business Outcomes, I explore the ways leading enterprises are taking advantage of emerging technologies and as-a-service solutions to build a “digital fabric” to connect with and influence their customers, employees, partners and providers. By building a digital fabric, organizations can create new digital value in four distinct areas:

  • Digital customer experience
  • Digital products and services
  • Digital supply chain and manufacturing
  • Digital enablement and productivity

Enterprises should only invest in the opportunities that are right for them and that they can capitalize on over the long term. Understanding both the industry and the enterprise-specific market potential of these areas will help individual companies identify the initiatives that lead to the most promising solutions for their unique business objectives. Those that have successfully traversed this new ground have done so, at least in part, because they have built healthy relationships with partners that bring market insight or help build capabilities designed specifically for their sustained growth.

Read the new white paper or contact me directly to discuss further.

A digital reboot

It is almost two years since I last posted anything on this blog.

A lot happened in those two years. Remember in 2014, when every business conversation was all about Facebook, social and the social enterprise? And before that, it was all about having a mobile presence and platform? And then it was all about big data?

2015 brought all these buzzwords together and down to earth. Suddenly we had the technology to carry out these ideas, and conversations about how to use these technologies turned mainstream. It was not just about outsourcing any longer. Nor was it just about technology & IT supporting the business – it was much more now.

It was about leapfrogging transformation using the possibilities of technology.

Technology moved in 2015 from the back office to the enterprise front office. Enterprises are more interested in following technology trends in Silicon Valley than they are in following best practices in the Outsourcing Capitals of Asia. The questions they ask themselves have changed too – while operational excellence is still important, there is a limit to how much you can grow a business by cutting costs.

Technology can now drive revenues if you leverage it, or bring down your current business model like a house of cards if you ignore it. And it is not about the individual technologies themselves – it is about how you orchestrate them along with old & new business processes to create new businesses & customers you never thought possible.


And when you are right in the midst of this storm as a consultant, you realise very quickly that what started out as a “digital transformation” is actually a business transformation.

Client conversations are very different today – they are all about how to steer and manage this change. Business Transformation using Technology can be intimidating – triggering more resistance than action.

Multiple topics that I had blogged about have taken on a new meaning in this situation. Almost every question that I had asked myself or explored two years ago has just been challenged. There are new topics to explore.

It is time to reboot my blog.

photo credit: 8 via photopin (license)

Are you ready to deliver your newly signed IT contract?

As this new business year begins, I want to start with a post that sets the tone for my study topic in the first quarter of 2014. It is the demand for continuity in every sense after signing an IT contract – a topic often ignored: SERVICE TRANSITION.

Service Transition has nuances that go beyond what is prescribed in ITIL. Discussing and negotiating large IT contracts requires more effort than one would expect. As a result, people are often dedicated for 5-6 months to design and close the contract. On the IT buyer’s side, this is usually treated as a project with definite goals. The provider puts together what is generally known as the pursuit team (with sales, technology experts and solution architects).

The solutioning and contracting process is an effort-intensive task and involves the above teams sitting together for 5-6 months – meeting every day and having very detailed discussions on how to set up future operations. Depending on how such discussions go and the amount of value each side finds in the other, these teams either grow apart or grow closer to one another. The presence of an advisory party helps to bring balance into the contract in both situations. I have been on all sides of this table – and it has been a 360-degree learning experience. It is amazing how your current viewpoint can affect the way you interpret a situation.

After the contract is signed, the IT buyer team is relieved at having completed the “contracting project”, and the pursuit team celebrates having won the contract. What both teams often neglect to do is get their own organizations ready for the contract.

After seeing many initiatives go through the trough of despair after signing the contract, here is my thought-jogger list:

If you are buying IT services…

Have you thought of:
a) Does your team know the scope of services of the contract?
b) How should your team reorganize themselves to face the new provider – what are the touch points?
c) What are the new roles and responsibilities on your side of the contract? Have you designated a Service Transition Manager?
d) Who should perform the above roles? Are they equipped for these roles (skill & experience)? Are they enabled to perform these roles (time and authority)?

If you are the Provider

Have you thought of:
a) How do you transfer the knowledge gathered during the contract into the delivery organization?
b) How do you ramp up to get the people who you need on the ground asap?
c) How do you continue the relationship that you have just painstakingly built over the last months with new members entering the picture?
d) What should you set up to manage the contract that you have negotiated?
e) How can you transfer the good relationship and rapport that you have built up with your new client to the future delivery team?

Ideally you don’t want to bring the delivery organizations on both sides into play only after the contract is signed.

Such discussions and decisions should happen long before. There is great benefit in bringing these parts of the organization into the picture earlier as periphery teams. The core teams should involve the periphery teams gradually in the discussion during the later stages of the contract. This should go through the stages: BRIEF THEM ON THE SOLUTION, INVOLVE THEM IN THE DISCUSSION, INVOLVE THEM IN DECISION MAKING. Both sides should perform a contract readiness check based on questions such as the above.

I will be spending the first quarter of 2014 exploring the multiple facets of this situation. Stay tuned.

photo credit: qwrrty via photopin cc

Will data growth overwhelm your data sensitivity policy?

Most conversations with End User Computing service providers noticeably center around service catalogs and service levels. In the heat of the discussion, there is one topic that sometimes gets neglected – Media Sanitization, i.e. how erasure of data is dealt with when media is recycled.

And while firms are focusing on immediate insight coming from constantly growing information stores, Media Sanitization grows in importance.

Sometimes such a conversation ends with the realization that the guideline for data erasure and media sanitization has not been fully thought through. This goes beyond decisions about what happens to the data on laptops, phones and other devices after their time is up. What about the application data residing in your data centre? If you have a BYOD approach, this gets even more complex. Think of the implications if you have an in-house mobile application that accesses a CRM solution installed on the iPhones of your employees.



Media sanitization as a topic cannot simply be delegated to the infrastructure provider’s management. You need a holistic approach towards data erasure. The journey starts much earlier – the concept for data erasure should play an important role in your storage, labelling and media reuse strategy.


The Levels of Data Removal

Richard Kissel from NIST makes the case for three types of data removal – these relate directly to the Types of Sensitive Information you might identify for your organization:
a) Clear – where you erase the data on the device using logical techniques such as overwriting; this defeats simple recovery tools like Unerase, but not laboratory recovery techniques,
b) Purge – where you use much stronger cryptographic (logical) or even physical means to remove data so that it cannot be recovered even using state-of-the-art laboratory techniques. The media can still be reused and handed over internally to other employees or even externally via a shared device pool,
c) Destroy – where you not only purge the data but also destroy the storage device permanently so that it can no longer save data or be read. This is potentially your option for highly sensitive data.
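These three levels only become operational once they are tied to your data classification. As a minimal sketch (in Python, with invented classification levels and invented policy rules – your own Data Sensitivity Classification Matrix will differ), a sanitization decision could look like this:

```python
# Hypothetical sketch: map a data-sensitivity classification to a
# NIST SP 800-88 style sanitization method before media is reused.
# The classification levels and the policy rules are assumptions.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    HIGHLY_SENSITIVE = 4

def sanitization_method(sensitivity: Sensitivity, leaves_organization: bool) -> str:
    """Pick Clear, Purge or Destroy based on the data's sensitivity and
    whether the media will leave the organization's control."""
    if sensitivity is Sensitivity.HIGHLY_SENSITIVE:
        return "Destroy"   # physical destruction, no reuse
    if sensitivity is Sensitivity.CONFIDENTIAL or leaves_organization:
        return "Purge"     # cryptographic erase or equivalent
    return "Clear"         # logical overwrite, internal reuse only

print(sanitization_method(Sensitivity.INTERNAL, leaves_organization=False))  # Clear
```

The point of encoding the policy like this is that operations staff never have to improvise: the matrix decides, and the decision can be logged for the audit trail discussed below.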

Design data storage for Erasure

  • The best way to set up the framework for clean data removal is to label data when it is created.
  • Create a Data Sensitivity Classification Matrix.
  • The nature of the data should guide multiple decisions: how data is handled, where it is stored, on what devices and in what ways it is made available, and finally how it is expunged.
  • Based on what data the media has been carrying, you also need a policy on whether the storage medium gets reused, recycled or, in extreme cases, destroyed permanently.

Make it easy to decide

  • Set up easy-to-understand criteria for labelling data and categorizing its security level. Draw inspiration from the Guidelines for Media Sanitization from NIST.
  • Ensure that your operations and processes can identify and carry out the necessary steps – based on whether the data should be cleared, purged or destroyed once the storage media reaches the end of its lifecycle or changes hands.
  • Go beyond the devices that you control – look into your BYOD approach to decide what services you will make available.

Verify and leave an Audit Trail

  • If you are in a conversation with your data storage provider, guide your provider so that they understand your purging processes, what actions are required and what triggers these actions. Build this into the Statement of Work; it is not enough to add an addendum to the contract with your data security guidelines.
  • Your erasure methodologies should leave a paper trail that documents all the actions as per the erasure guidelines.
  • Periodic spot checks by an auditing department will further ensure that the erasure guidelines are being followed.

In a world where ever more data is created and stored on a myriad of storage media, media sanitization is critical and unfortunately neglected. Early actions to fix such potential leaks will go a long way towards ensuring that your data sensitivity needs are covered.

Have you taken care of your Media Sanitization requirements?


photo credit: inf3ktion via photopin cc

Are you using the right measures to control your operation?

In my last post, I dealt with how you can use service levels to demonstrate the IT value of your services.
Now we get into the engine room – how you should use service levels and operational level agreements to measure your own delivery operation. Your “delivery landscape” will be a mixture of the services delivered by your own team, adjacent departments and the providers who handle the scope that you outsourced. I will not dive into how to manage each of these components, since that is a subject in itself.
Instead I want to deal with the service levels and measurements that you should have in place for these interfaces. While designing service levels for such interfaces, I have learnt to appreciate how different they are in nature from the ones you use towards your business services.


The questions that you would now ask yourself are very different in nature from those that you use to demonstrate your value. Irrespective of whether you are measuring service levels with your provider or setting operating level agreements with adjacent departments, the questions you should ask are:
a) Can you use these measures to control and manage your delivery landscape?
b) Can the levels that you have set for yourself be attained?
c) Are you able to measure these service levels properly?

Can you use these measures to control and manage?

  • Control does not mean measuring everything that you can. Seek emphasis in service levels. Sometimes the ease of measuring something drives the propensity to measure and report it.
  • Differentiate between measuring (for the sake of control) and reporting (for the sake of understanding) metrics.
  • Before you delve into finding out what you can measure, first concentrate on “what must you measure and why?”
  • The previous exercise with your business department will have shown you how the service that you deliver intertwines with and actually affects the business operation.
  • This will tell you whether you should look out for critical timelines, large transaction volumes, accuracy, and so on.
  • Concentrate on the few key measurements that you can actually use to control the key parts of the service components that you are managing. Derive these directly from the understanding of what makes or breaks your business.

Can the levels that you have set for yourself be attained?

  • Defining a Service Level Objective for each of your service components is not enough. You should set values that are attainable within the costs or boundaries of delivery that you have been given.
  • It is no use accepting a 99.9999% availability target from your end client when you are unable to break this down across your applications and infrastructure, or cannot deliver it within the budget constraints.
  • This is particularly important when you are getting zealous in managing your provider. In your eagerness to measure and manage, you might be setting target values that are simply too expensive to attain.
  • After you have set values that you can attain, find ways to actually control how you attain them. I have seen some IT managers cleverly build latency into their system – latency that can be gradually removed as the load on the system increases. There are many such smart practices one can borrow from system architects.
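The 99.9999% example above can be made concrete with a little arithmetic. The sketch below (with invented component availabilities) shows why availabilities of serially dependent components multiply, so an aggressive end-to-end target quickly becomes unattainable unless every layer is far better than the target itself:

```python
# Illustrative sketch: why an end-to-end six-nines target rarely
# survives decomposition. For components in series, availabilities
# multiply. The component values below are made up for illustration.
components = {
    "network": 0.9995,
    "infrastructure": 0.9990,
    "application": 0.9990,
}

end_to_end = 1.0
for name, availability in components.items():
    end_to_end *= availability

print(f"end-to-end availability: {end_to_end:.4%}")
# roughly 99.75% – nowhere near six nines, despite three solid layers
```

Run the numbers before you sign: three individually respectable layers already cost you about two hours of downtime a month.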

Are you able to measure these service levels properly?

  • There are miles between the intent to measure something and actually being in a position to do so. And the more holistic the intent (like Business Impact), the more precise you need to be in how you measure it.
  • So as you design your service levels, make sure that you actually know how each one can be measured. I once had to convince a client that a particular Service Level Calculation (System Availability) was not fully thought through – it took us two hours to come up with a formula to measure it.
  • Do the math. Start by very carefully designing the algorithm and formula for the measurement: what goes into the numerator? What goes into the denominator? What is the sample size of the measurement? Are you aiming at a percentage-based measurement or a number-based measurement? What are the implications of each?
  • Then ask yourself: do you have the service management capacity to measure and follow up on all that you have designed?
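To illustrate “do the math”, here is a minimal availability calculation with an explicit numerator and denominator. All the figures – the 30-day window, the maintenance exclusion, the outage minutes – are invented for illustration, and real contracts will define each term far more precisely:

```python
# Minimal sketch of one service-level formula: percentage-based
# availability over an agreed service window. Which outages count
# and what is excluded must be nailed down in the contract.
agreed_service_minutes = 30 * 24 * 60     # a 30-day measurement window
planned_maintenance_minutes = 240         # excluded by agreement
unplanned_outage_minutes = [42, 18, 7]    # measured outages in the window

denominator = agreed_service_minutes - planned_maintenance_minutes
numerator = denominator - sum(unplanned_outage_minutes)
availability = numerator / denominator

print(f"measured availability: {availability:.3%}")  # about 99.84% here
```

Writing the formula down this explicitly is exactly what exposes gaps: does planned maintenance belong in the denominator? Do partial degradations count as outages? Those are contract questions, not tooling questions.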

Best practices from the industry, Service Level Agreement examples, internet searches, and tool vendors will give you a plethora of choices of what you can measure. Reading about and gathering such measures is the easy part. The tough part is choosing what you spend your precious service management capacity on – and the art lies in translating all of this into your understanding of how you are delivering IT value.

photo credit: Lifelog.it via photopin cc

Why Accuracy SLAs can create or destroy the value of your service

SLA literature in the marketplace waxes eloquent on topics like Availability and Performance. However, one of the most ignored topics in an increasingly data-driven world is service levels that deal with Accuracy. Not paying attention to demonstrating accuracy can poke large holes in the value of your cloud and big data solutions. Here is how you can address such gaps.


At first glance, Accuracy sounds soft and qualitative. A recent deep dive into this topic forced me to look at the dimensions of Accuracy, and I emerged with two aspects: Data Accuracy and Process Accuracy.

How Accurate is Your Data?

Accurate data is the basis of decision-making. In today’s world of big data and cloud-enabled applications, where data resides physically in multiple locations, data accuracy is of prime importance. Let’s look at two aspects of measuring data accuracy: integrity and currency.

Data Integrity

  • This is a measure of how data is protected against corruption through logical errors, user input errors or hardware errors.
  • If data integrity cannot be ensured, this has a severe backlash on the quality of service that your application is providing.
  • A system which cannot guarantee certain levels of data integrity is not of much use, even though it might satisfy high performance and availability SLAs.
  • So while ensuring your application’s performance and availability, also ensure the same for your data.
  • So how do you measure data integrity? Data Profiling is a common approach.
  • There are multiple technical solutions (as a Google search on “measure data integrity” will reveal) which I will not cover in this blog post.
  • Focus on how to demonstrate measures for Data Integrity with your SLA Definition.
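As a rough illustration of data profiling, the sketch below computes two simple integrity metrics – completeness and validity – over a handful of invented records. Real profiling tools do far more, but the SLA-relevant output is always a number like these:

```python
# Hypothetical data-profiling sketch: compute simple integrity metrics
# over a batch of records so they can be reported against an SLA.
# The field names and validation rules are assumptions for illustration.
records = [
    {"customer_id": "C1", "email": "a@example.com", "age": 34},
    {"customer_id": "C2", "email": None,            "age": 29},
    {"customer_id": "C3", "email": "c@example.com", "age": -5},
]

total = len(records)
missing_email = sum(1 for r in records if not r["email"])
invalid_age = sum(1 for r in records if not 0 <= r["age"] <= 130)

completeness = 1 - missing_email / total   # share of records with an email
validity = 1 - invalid_age / total         # share of records with a sane age
print(f"completeness: {completeness:.1%}, validity: {validity:.1%}")
```

Once integrity is expressed as ratios like these, agreeing a target value and a measurement window with your provider becomes a normal SLA discussion rather than a qualitative argument.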

Data Currency

  • In an information-hungry world that relies on big data and predictive analytics to solve problems, the rate of data gathering and capture is increasing exponentially.
  • Data in such real-life databases can become obsolete rapidly.
  • Capturing data across various dimensions can sometimes lead to multiple values of the same entity sitting in a database.
  • What is worse: some of these values would have been correct once – but most may have lost their recency and turned stale.
  • This can skew data-driven decisions badly, especially when layers like predictive analytics pre-process data and you rely on the interpretation.
  • Sometimes such interpretations cause automatic algorithms to take actions that worsen the problem.
  • With distributed databases and data warehouses spanning different locations, latencies can introduce data currency errors too.
  • Especially in a high-volume transaction system, such measures are critical.
  • If this is your world, then your SLA Management should demonstrate how well your application or service can identify the current value of an entity and answer queries with these current values, even in the absence of timestamps.
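One way to put a number on data currency is to compare the values your service actually answers with against a trusted reference snapshot. The entities and values below are invented for illustration; the measure is simply the share of served values that are still current:

```python
# Illustrative currency measurement: compare served values against a
# source-of-truth snapshot and report the share that is still current.
# The keys and prices are invented for illustration.
reference = {"price:SKU-1": 19.99, "price:SKU-2": 5.49, "price:SKU-3": 7.00}
served = {"price:SKU-1": 19.99, "price:SKU-2": 5.49, "price:SKU-3": 6.50}

current = sum(1 for key, value in served.items() if reference.get(key) == value)
currency_ratio = current / len(served)
print(f"data currency: {currency_ratio:.1%}")  # 2 of 3 served values current
```

In practice the hard part is obtaining a trustworthy reference snapshot at a defined point in time; once you have one, the currency ratio itself is a straightforward SLA metric.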

The Human Side: Process Accuracy

We should not forget the human side of data handling – this is where the second aspect of Accuracy comes in. And this is process accuracy. How accurate is your data assimilation process?

  • A typical data-warehouse system relies on multiple data feeds.
  • The number of such feeds continuously increases as the complexity of the application and data landscape increases.
  • Most organizations have very complex Extract-Transform-Load stages that make logical sense of the conglomerate data out of such feeds.
  • These are often very complex job control algorithms that are built in the form of workflows.
  • As the number of feeds increases, the complexity of such algorithms exponentially rises.
  • This reaches a point where logical errors creep in through human design. This article on ETL architecture will give you a feel for how human intervention and decision making can impact otherwise sound data.

The human impact of your data

  • Performance data is an excellent example to explore the human impact of data.
  • Such data is the basis for financial rewards and career-making decisions.
  • To demonstrate value in such an environment, you have to be able to demonstrate accuracy at every step:
    • Are people filling in forms and database fields correctly?
    • Is the right and complete data being extracted for analysis?
    • Is all the data being used for analysis? What analysis algorithms are being used? Are they applied uniformly?
    • How is the analysis being interpreted? How are conclusions being drawn?
  • If your service is a Human Resources Platform-as-a-Service offering, accuracy measurements and SLAs for each of the above questions are critical to the value that you are able to offer.
  • Sometimes this can be more important than the performance and availability of the system that you are running. Stacey Barr raises some important aspects of the human side of data in this article.

Are you creating value with Accuracy?

Depending on how data intensive your service is (large volumes, transactions, data-warehouses etc.), the concept of Accuracy will play a large role in how your service is being perceived.

Formulating an Accuracy SLA Definition is very situation-based. There is no industry standard. The environment that your service serves will show whether you should be looking at duplication, at consistency and synchronisation, or at data coverage.
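To make one of these options concrete, here is a minimal sketch of a duplication measurement over an invented customer table, keyed on an assumed business key (normalized name plus date of birth). The key definition is the situation-specific part; the metric itself is trivial:

```python
# Hypothetical duplication-rate measurement. The business key
# (normalized name + date of birth) is an assumption; choosing the
# right key for your data is the real design decision.
rows = [
    ("anna meyer", "1980-04-02"),
    ("Anna Meyer", "1980-04-02"),   # same person, different casing
    ("Ben Smith", "1975-11-30"),
]

keys = [(name.strip().lower(), dob) for name, dob in rows]
duplicates = len(keys) - len(set(keys))
duplication_rate = duplicates / len(keys)
print(f"duplication rate: {duplication_rate:.1%}")  # 1 duplicate in 3 rows
```

A consistency or coverage measure would follow the same pattern: define the rule that fits your environment, count violations, and report the ratio against an agreed target.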

Just like with Performance SLAs, you are on the right track when you study the needs of the business you are serving, and then look at how these needs depend on the different quality dimensions of data in the service that you offer. Here is your opportunity to demonstrate, in numbers, the value you are creating.


How data intensive is your service? Have you explored how Accuracy based SLAs can create or destroy the value of your service?

photo credit: nickwheeleroz via photopin cc
