If it ain’t broke, fix it.  And fast.

Steve Prefontaine

Steve Prefontaine of Oregon set a U.S. record in the 3,000-meter race on Saturday, June 26, 1972 in the Rose Festival Track Meet at Gresham, Oregon. His time was 7 minutes, 45.8 seconds.

I bet you read that headline twice, didn’t you?

It may seem backwards, but stay with me here.

Early this week I was with a Senior VP of Operations for a Fortune 200 company. I asked a few of my standard battery of questions about business operations, operational excellence and what new initiatives were underway.

When I asked about business operations, an innocent and perhaps overly broad question, I received a vigorous and somewhat surprising response.

Here’s what he said.

“Everything is going great!  All of our operating metrics are trending up.  Quality and pace are improving.  Costs are dropping, particularly as we redeploy to the cloud.  However, despite these solid results, we are starting from scratch in many areas, dispensing with our traditional methodologies and recreating how we function.”

Counterintuitive?  Perhaps not.

I once heard a saying that went something like this.


“Your processes are perfectly designed to get you the results you are getting right now.”


Read that again. It is a powerful statement.

So are the results you’re getting now where you want to be? I don’t know an executive who would say yes to that question. And you shouldn’t either.

My friend has a keen grasp on the macro operating environment, i.e. the one beyond the horizon of his current set of operating responsibilities.  Knowing that his competitors may leapfrog him as they adopt new technologies keeps him on his toes, even those competitors who may not share the level of excellence his organization has created.

The business playing field can be tilted even further in favor of companies who innovate, ensuring they stay at the forefront of increasing business velocity, particularly those who do it before it becomes necessary. (Read: too late.)

To tip the scales your way in the world of ‘every company is a software company’, you must:

  • Implement new principles like DevOps, Agile, and continuous improvement
  • Be driven by the knowledge that what is new today will soon be old
  • Prepare for (or better yet, invent!) the next set of industry-changing technologies, the ones that haven’t even been developed yet

So keep breaking things, keep reinventing, and stay ahead of the also-rans!

After all, where would Steve Prefontaine have been if he’d said, “I am an extraordinary runner; I think I’ll stop trying to improve now”?

Peer-to-Peer: How Record and Playback is Killing Your Productivity

I am often asked about the difference between the record and playback testing approach and the data-driven testing methodology. This post outlines the difference between the two, and illustrates why one of them is killing your productivity.

Record and playback testing methods were developed in the 1980s and were a great use of technology at the time. The approach allows business users and/or quality assurance testers to walk through a business process or test flow one step at a time while the tool records each screen, mouse click and data entry the user encounters.

The result is test cases that follow a single path through the application under test, with very specific data for that path. To capture a different path through the process, which is required in nearly one hundred percent of cases, the user has to walk through it all over again.

Compared to manual testing, this was clearly an improvement and gave many organizations their first taste of automated testing.

Sounds great. So what’s the problem?

If your processes are very simple and rarely change, this could be an excellent solution. In most organizations, however, the applications that require comprehensive testing are complex applications that change frequently.

Imagine, in the scenario above, what would happen if a field were added to a screen that had already been recorded. Or if the test path changed. Or if a data-dependent operation were modified. And imagine if you had to run that test 50 different times with 50 different sets of data. You guessed it: you would have to re-record the process each time to get that single-path test case.
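
To make the problem concrete, here is a minimal sketch of what a recorded, single-path script effectively is. The FakeApp driver, the field names and the flow are hypothetical stand-ins, not any vendor’s actual recording output; the point is simply that both the steps and the data end up baked into the script.

```python
# Minimal illustration of a "path-locked" recording. FakeApp is a hypothetical
# stand-in for a recorded UI session; real tools drive the actual screens.

class FakeApp:
    def __init__(self):
        self.fields = {}
        self.status = None

    def click(self, control):
        # In a real recording this replays a captured mouse click.
        if control == "Submit":
            self.status = "Order Created"

    def type(self, field, value):
        self.fields[field] = value

    def read(self, field):
        return self.status


def recorded_order_entry_test(app):
    app.click("New Order")             # every step captured during recording
    app.type("Customer", "ACME Corp")  # data hard-coded into the script
    app.type("Quantity", "10")
    app.click("Submit")
    assert app.read("Status") == "Order Created"


recorded_order_entry_test(FakeApp())
# Add a field to the screen, change the path, or need 50 data sets?
# Each change means finding, editing or re-recording this script.
```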

Two fundamental problems with this approach are:

PATH-LOCKED:   In record and playback, you are immediately “path-locked.” Path-locked means that the test cases created with record and playback are recordings of a single path in a business process. If one small part of that flow changes, such as a new field on one of the application screens in that flow, the scripts have to be either found and edited, or completely re-recorded. Now consider how often your applications under test actually change. For most companies this is hundreds of times a year.

Path-locked gives a cloudy picture of your testing, at best.

This spawns a related challenge: a vast body of potentially useless recordings and no easy way of knowing what is valid at any given moment. Companies often end up with different people doing this work, which means the names of the files are often inconsistent, making it harder to find the right ones. Rarely do people go back and archive or dispose of outdated recordings, leaving you with a multitude of test cases and no true way of knowing what is valid anymore.

Unfortunately, I have seen many companies scrap the time, effort and expertise that went into creating these assets and simply start over with their record and playback, because starting over is easier than untangling what is still valid.

TEST COVERAGE:  Record and playback makes it difficult to get a handle on your test coverage. It is nearly impossible, without a lot of manual work (the very thing most companies are trying to get away from), to lay out the business process visually and make sure you have the right test coverage in the right areas.

Visibility into the breadth and depth of your test coverage is crucial in ensuring defects are found. Ensuring adequate test coverage is even more important in highly regulated industries, in companies that rely on their applications under test to run their businesses, and in organizations that require a high degree of accuracy. In most cases, that’s every medium to large business out there.
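
As a rough illustration of the question a coverage report has to answer, here is a small sketch. The business-process paths and the set of tested paths are made-up examples, not output from any tool; the point is that coverage is simply a comparison between the paths your process contains and the paths your tests actually exercise.

```python
# Rough sketch of a coverage check: which business-process paths have a test
# behind them? The path names below are invented for illustration.

process_paths = {
    "order-standard", "order-rush", "order-credit-hold",
    "return-full", "return-partial",
}
tested_paths = {"order-standard", "return-full"}  # what the recordings cover

coverage = len(tested_paths & process_paths) / len(process_paths)
print(f"Test coverage: {coverage:.0%}")                        # 40%
print("Untested paths:", sorted(process_paths - tested_paths))
```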

What is data-driven testing?

Data-driven testing, sometimes known as keyword-based testing, is a method in which the test flow is built once and the data drives the scenarios that run through it.

For example, when using TurnKey’s cFactory for your automated test creation and maintenance, you’d click the “learn” button on every screen as you walk through the process. Test components are automatically built for everything on the screen that the user can interact with. This includes check boxes, data verification, order of operation, click buttons, and more.

After you have walked through your process, an Excel datasheet is automatically created showing every single component field in the process, allowing you to drive any combination of data through your test.  For each component, there is a screen shot attached so you know exactly where you are in the application.

Once the process has been “learned”, you can now execute multiple scenarios through the business components, and multiple data scenarios at the test case level. This is the essence of data-driven testing.
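
For readers who think in code, here is a minimal, generic sketch of the same idea using pytest. The enter_order() helper and its fields are hypothetical placeholders for a learned, component-based flow, and the parameter rows play the role of the datasheet; this is not cFactory’s API, just an illustration of one flow driven by many data scenarios.

```python
# Generic data-driven sketch: one reusable flow, many data rows.
# enter_order() and its fields are hypothetical placeholders.

import pytest


def enter_order(customer, quantity, priority):
    """Stand-in for a learned, component-based business flow."""
    assert quantity > 0
    return {"customer": customer, "priority": priority, "status": "Order Created"}


@pytest.mark.parametrize("customer, quantity, priority", [
    ("ACME Corp", 10, "standard"),
    ("Globex", 500, "rush"),
    ("Initech", 1, "standard"),
    # ...one row per scenario in the datasheet; no re-recording required
])
def test_order_entry(customer, quantity, priority):
    result = enter_order(customer, quantity, priority)
    assert result["status"] == "Order Created"
```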

Data-driven component-based testing has enormous cost-saving benefits, including:

  • 90% increase in test coverage: companies have seen a 90% increase in their test coverage simply by having cFactory automatically create their test cases.
  • Test cycles reduced from months to days: Almac went from 3-month to 3-day test cycles with automated maintenance, a patented process by which cFactory detects changes in your application and automatically updates your test cases.
  • No programming required: the user interface is designed for non-technical users (most often the people who are closest to the application under test).

When people ask me about the difference between record and playback and data-driven testing technologies, I sometimes think about the difference between linear, fixed (path-locked) cassette tape recordings and the decidedly non-linear world of digital music. It’s kind of like that.

If you found this blog useful, check out these:

Evergreen Automation: Solving A Classic Test Automation Problem

The Classic Achilles Heel in Test Automation

One of the classic challenges in testing, and the Achilles heel in many a test automation program, is the difficulty of keeping test cases current as changes are made to the applications under test.  While some applications remain reasonably static, others are much more change-centric and often extremely complex.

As the pace and velocity of business accelerates, this change-centricity increases, putting more pressure than ever on application owners and QA professionals to test faster/test better.

Often the fallout of this pressure to test faster is unintended deterioration of the test automation assets. This happens quickly and generally results in redevelopment of the test assets, as it is often easier to rebuild them than to unravel the changes that must be made to a test case to make it usable again.

Hardly what most of us would call intelligent automation!

 

Test Case Half-Life

This chart illustrates the natural degeneration of test cases through successive application changes:

Patented Solution: Evergreen Automation™ by TurnKey

TurnKey was recently awarded a patent for our innovative application-aware capability that we affectionately call “maintenance mode,” a capability that solves the test case maintenance issue for good.

By detecting when the application differs from the test component in use, graphically showing the changes, and then automatically updating not only every affected test case but also the data management assets associated with those test cases and test sets, we create an “Evergreen Automation” process designed to keep your test assets and your as-built application in sync at all times.

This is why we call it “Evergreen Automation.” It allows you to keep your test assets fresh and up to date at all times.  We do this for all of your applications, including those mission critical enterprise applications that run the business.
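
To illustrate the general idea (a simplified sketch with assumed field names, not TurnKey’s patented implementation), the core of any application-aware maintenance step is a diff between what a component was learned against and what the screen exposes today, followed by propagating that change to the affected assets:

```python
# Simplified sketch of application-aware maintenance: diff the fields a
# component was learned against versus the as-built screen, then update the
# component and its datasheet columns instead of re-recording. All names
# below are assumed examples.

learned_fields = {"Customer", "Quantity", "Ship Date"}                 # at learn time
current_fields = {"Customer", "Quantity", "Ship Date", "Promo Code"}   # screen today

added = current_fields - learned_fields
removed = learned_fields - current_fields

if added or removed:
    print("Screen changed:", {"added": sorted(added), "removed": sorted(removed)})
    learned_fields = current_fields               # update the component definition
    datasheet_columns = sorted(learned_fields)    # keep the data assets in sync too
    print("Updated datasheet columns:", datasheet_columns)
```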

Our patented, application-aware capability can have a profound effect on the value of test automation, eliminating the extraordinary, largely manual process of test maintenance.  This allows users to focus their energies on what really delivers value: more frequent test cycles, broader and deeper testing, more rapid deployments and, ultimately, big improvements in application quality.

 

Additional Resources:

To CIOs: Did Critical ERP Security Updates and Patches Cause the PoS Breach?

Recently an article appeared on the Computing UK website entitled, “Oracle attackers ‘possibly got unlimited control over credit cards’ on US retail systems, warns ERPScan.”  The article talks about exposure of potentially every credit card used in US retail as a result of the control hackers gained over Point of Sale (PoS) systems by exploiting a vulnerability.

This isn’t really news anymore; we have all become somewhat numb to the stream of stories about account information being compromised in these incursions.  The security software industry is focused on prevention of these data breaches, and enormous amounts of time and money are spent to secure corporate systems.

What does a PoS breach have to do with application testing?

Great question…glad that you asked.

No one would deliberately ignore application of these security updates if there weren’t significant operational barriers to doing so. 

So, why is it so common to have substantial delays in applying the latest updates and patches?

Enterprise applications present numerous unique challenges to the timely application of patches, updates, support packs, etc. Here are just a few that impact an organization’s ability to deploy updates on a timely basis:

Integrated Solutions – Enterprise applications are typically tightly integrated solutions, with many functional modules.  A change made to one area of the application often affects other parts of the application.  These metadata-based applications must be thoroughly tested to ensure that changes and updates do not have unintended functional consequences elsewhere in the application.

Highly Customized – Virtually every company modifies the software provided by the vendor to reflect the unique nature of its business.  It is more common than not for an application like SAP or Oracle EBS to be 30% or more customized, which of course means that the vendor providing the patch or support pack cannot tell you what the impact on your system will be.  It’s up to the user to validate that the application functionally works as expected, in its entirety.  Customers are also concerned that any update provided by the vendor could negatively impact the customizations they have applied to the application, causing even more re-work and business process validation.  Which leads me to my final point.

Mission Critical – These applications are often the heart of the enterprise.  These applications run finance, HR, sales, supply chain, distribution…virtually every function of the business.  A production outage, even briefly, can have enormous consequence and impact.  Therefore, and rightly so, it makes sense to proceed with extreme caution before introducing any change into the production system. 

Because the process of fully testing the applications is essential, but lengthy and resource-intensive, it isn’t unusual for these patches and support packs to remain in queue to be combined with a broader set of changes so that the testing process can be done all at once.

The risk of leaving the systems vulnerable is balanced against the business risk of impacting production systems, as well as the time, cost and complexity of actually validating applications.  This is where effective business process validation software products can shrink this gap and eliminate the trade-offs inherent in the standard decision process.

TurnKey Solutions specializes in tackling the hardest problems – end-to-end, cross-platform business process validation of the most complex and important applications in an enterprise.  We can help shield companies from security risks as well as provide greater visibility, control and business agility to application owners.

5 Key Steps to More Effective QA Automation within DevOps

Practical advice on achieving quality-driven DevOps…

DevOps White Paper_122015_final_

Thought for the day-

If you think about it…

Today you are as old as you have ever been.

You are also as young as you will ever be.

May as well make it a great day.

The two ways to get fooled…

There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.

Why I am ditching United (and everyone else) for Southwest

By just about any measure, I am a frequent traveler. My rough calculations show that I have flown somewhere between 3.5 and 4 million miles over the last 25 years or so. Not sure what that equates to in terms of how much time I have invested in going to/from airports, time at airports and the time spent sitting on an airplane. Just the thought of all of those hours makes me wince.

So, for someone who spends that kind of time engaged in air travel, the kinds of things that make air travel tolerable are certainly magnified. Decent fares, good customer service, on-time arrivals, baggage that gets there when I have to check it…all essentials.

Which brings me to my switch. I have been an extremely loyal United flyer, having recently exceeded 2 million air-miles with them! I have flown roughly the equivalent of going around the planet 80 times with United. On the flight that I cracked the 2 million mile barrier, I did get a handwritten thank you card from the crew…delivered to me in the back of the plane. They also offered me a free drink, but I had to pass as it was 8:00 in the morning, although I will confess to considering a bloody mary.

So…why change? Why move carriers after all of the status that I have earned? It comes down to three things, all of which play in Southwest’s favor.

First, we all know that accumulating and using miles is important. Airlines reward loyalty by granting perks and mileage increase awards based upon your travel volume. These miles are valuable…nothing like a free trip right? (As though I want to get on another plane).

United has devalued their miles to the point that they are almost meaningless. They claim that there are “saver” seats available for a 25K round-trip, but I cannot ever find them, regardless of how far in advance I am booking. Case in point: I recently tried booking two round-trip seats for late September to St Louis, a reasonably short hop from Denver. With a 3-month advance booking, it was 50,000 miles each, for a total of 100,000 miles. A full year’s worth of travel to fly the two of us 90 minutes away! Similarly, when trying to use miles for a trip to Peru, I didn’t have enough miles in the bank for two tickets, as I needed 300,000. Contrast that with a recent trip to Salt Lake City on Southwest where we spent a grand total of 17,000 miles for two seats.

Secondly, the matter of change fees. Some airlines have gotten very creative with the various ways that you can be charged, from bags to drinks to meals to which seat you select. Change fees fall into this category. The $100 change fee, plus the fare difference (which I can understand), seems usurious. Maybe after 2 million miles we could show a little flexibility? In an age where most bookings and modifications are self-service, done on the website or at a kiosk without any support from airline personnel, it becomes obvious that this is simply a penalty and another means of pure profit for the airline at the expense of customer service and convenience. I recall standing at the gate of a Frontier flight that was almost empty, but I couldn’t go standby unless I paid a $150 fee. It would have cost the airline nothing to accommodate me…as a result I haven’t flown Frontier since.

Lastly, no one that I know gives high marks to a US carrier for customer service. Fly Singapore Airlines, Cathay Pacific or Virgin, as examples, and you notice the differences immediately. Most lists of top global airlines don’t have a US-based carrier in the top 20. But for those of us who travel mostly or exclusively in the USA, you find Alaska Airlines, Virgin, JetBlue and Southwest typically at the top of the list. However, if you value mileage programs, customer service with transparent fare pricing and travel flexibility, and, just as importantly, network and frequency of flights, Southwest separates from the pack.

Yes, the Southwest boarding process is a little funky, but everyone seems to get it and it works. It is not really much different than the zone boarding used by United anyway.

One more thing that matters, although this is very subjective. I think that the Southwest crews are a lot friendlier than any other airline (apologies to my friend Lisa!). It seems to be the case of reality meeting the marketing hype, but it can make the drudgery of flying just a little less onerous when the flight attendants are cracking jokes and plying you with free drinks and the ubiquitous packs of peanuts and pretzels. You can fake a smile at the boarding door but it has to be hard to pretend you are happy for the entire flight! I couldn’t pull it off, myself.

My change isn’t 100%, or permanent. I will still shop for the best fare (within reason, which leaves out Frontier and Spirit), and international flying is a whole different animal. But my loyalty comes with a price, and when I am rewarded with stiff, senseless fees, devalued miles and less-than-decent service…my business is up for grabs.

Offshore manual testing isn’t automation

I recently read an excellent article authored by Philip Howard of Bloor Research.  The article, entitled “Testing is Broken”, describes at a high level some of the institutional and practical challenges behind why testing and software quality assurance remain largely a manual process.  Mr. Howard briefly touches on most of the substantive barriers to widespread test automation tool adoption.

I spend much of my time with executives who are responsible for software quality assurance.  In the case of complex enterprise applications, this translates into the need for end-to-end, cross-platform business process validation.  Sounds complicated?  In actual practice it is even more complex than it appears.  Why is this?

Enterprise applications (think SAP, Oracle EBS, and other monolithic applications) typically share a number of unique characteristics that make the business process validation process daunting.  Most of these applications are:

  • Mission critical
  • Highly customized
  • Integrated with other applications/technologies
  • Change-centric

Often, the tools and processes used to ensure high levels of quality for these high-risk applications aren’t sufficient to accomplish the objective.

In many cases, the approach to resolving this has been to outsource manual and semi-automated (scripted) testing to off-shore outsourcers who help reduce the labor expense.  This is frequently referred to by the end user as “their automation solution”.  Unfortunately, there is a notable lack of genuine automation in that approach.

Low cost manual labor isn’t automation.

Senior IT executives will tell me, with a straight face, that their automation solution includes hundreds, and sometimes thousands, of outsourced manual testers.  This means that what we call automation is actually manual, repetitive, and error prone, which translates into severe limits in terms of scope and complexity.

Even the firms that outsource the development of semi-automated “scripted” tests run into trouble, since the people with the technical skills necessary to develop these scripts rarely understand the applications under test, and they are too far removed from the application owners, who aren’t scripters.  The gap between these two skill sets creates obvious problems.

Furthermore, the people who are developing the manual tests rarely understand the applications under test, since they are too far removed from the application owners, who don’t know how to write tests or don’t have the time.  Yet the application owners are the ones who own the responsibility of ensuring that the applications function as required.

As Philip mentions in his article, the service providers have no incentive to change either dynamic.  They are, in fact, greatly incented to perpetuate both, because they stand to make more money this way.  Because the knowledge of the systems and the test artifacts are vendor-specific, it becomes extremely difficult to manage the provider and ensure that the process is efficient, not to mention have them take ownership of the process and the result.  The resulting cost, inefficiency and lack of tangible results become the norm, and a vicious cycle of hiring and firing offshore service providers ensues.

Today’s innovative test automation tools tackle the inherent problems with manual testing.  Modern architecture eliminates the need for highly specialized technical skills to create automation manually, putting easy to use technology into the hands of the business users that understand the application.  Testing resources can greatly expand the depth and breadth of the application coverage, resulting in more agile, rapid turnaround and improved overall quality.

The cost of poor quality can have a disastrous effect on a business.  For enterprise applications, often used for everything from finance to HR, manufacturing and supply chain management, the cost of a production outage is measured in millions of dollars per hour.  Customer satisfaction and retention suffer.  Using today’s test automation technology can replace the cost, inefficiency and limitations associated with manual testing and deliver substantial operational and financial benefits.

So don’t be fooled…lowering the cost of manual testing and scripting isn’t automation.

 

The Cost of Doing Nothing

In my business, probably like in many of yours, one of the critical steps in a sales/buying process is establishing the business case for acquisition.  Assuming that the technical and economic buyers are on board and have established that your company and solution are the VOC (vendor of choice), there is typically a gauntlet of approvals to clear before a sale takes place.  How onerous this process is depends upon transaction size…the larger the transaction, the greater the oversight and scrutiny, and the more levels of approval.

Most technical buyers are not experts in developing a business case, particularly one with a financial bent.  They are generally quite good at describing the operational benefits, but establishing the financial impact in terms that a CFO understands is more challenging.  Additionally, it often requires getting data that isn’t right at hand, then expressing value in a number of ways…IRR, MRR, NPV, ROI.  From a sales process perspective I have always advocated delivering a framework to capture and express financial impact, and helping translate the operational benefits financially.

Most companies that go through this exercise focus on ROI.  While this is an important metric, stopping there is stopping short.

No business case analysis is complete unless you also establish the cost of doing nothing.

Two things work against you when selling.  First, the general law of inertia.  Unless there is a real, acute problem, most people aren’t interested in incremental process improvement, particularly if it means more work assessing the offering.  Secondly, the corporate machine works against spending.  Most companies’ biggest competitor isn’t other providers…but rather prospects that don’t buy anything.

This comes at a cost.  And if you know what that cost is, using their numbers, it changes the dynamic.  As much as corporate procurement works actively not to procure anything, they hate losing money even more.  No one wants to hear that for every month that a decision is delayed, they are losing (or excessively spending) significant $$.  This quantifies the value of action and urgency.
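
As a back-of-the-envelope illustration (all figures here are assumed, not from any real deal), the arithmetic is simple: take the annual value the prospect agrees the solution delivers, divide by twelve, and every month of delay forgoes that amount.

```python
# Back-of-the-envelope "cost of doing nothing" sketch; all figures are assumed.

annual_value = 2_400_000                  # yearly savings the prospect agrees to
monthly_cost_of_delay = annual_value / 12

for months_delayed in (1, 3, 6):
    lost = months_delayed * monthly_cost_of_delay
    print(f"{months_delayed} month(s) of delay forgoes ${lost:,.0f}")
```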

This also really helps in the negotiation process.  Once, while negotiating a seven-figure software agreement, the “buyer” was pushing hard for about $100K in concessions.  The back and forth went on for a few weeks.  When I gently (OK, not gently at all) pointed out that the delay had already cost them about $1.3M, it brought context to the negotiation and also underscored the value of my products.  The negotiation ended very quickly at that point.

So do yourself and your prospects a favor…find the cost of doing nothing!