
Mainframes and workload placement: Time for a dose of objectivity?

When Tony Lock and I interviewed Reg Harbeck, Chief Strategist at Mainframe Analytics Ltd, it was immediately obvious that we had different perspectives on the world. As a self-proclaimed “mainframe evangelist”, Harbeck’s starting point for our discussion on the role and future of the IBM Z platform was – let’s say – pretty robust: “The mainframe is the wheel of business computing in so many large organisations and there’s literally no possibility of anything displacing it.”

As someone with a background in ‘distributed computing’ – traditional mainframer speak for all aspects of IT that exist outside the IBM Z world – I found Harbeck’s statement a bit too black and white. It’s not that I’m a mainframe naysayer, but I personally think of the mainframe as just one of a number of important platforms that have their place in addressing core business computing needs. My colleague Tony Lock sat somewhere in between us, perhaps best described as a ‘mainframe advocate’, particularly in the context of business-critical workloads.

And so the scene was set for quite a lively debate.

Premise for the discussion

One thing we all agreed on from the outset was that the way workload placement decisions are made in many organisations is screwed up. More specifically, all three of us had witnessed applications and workloads that would be ideally suited to the mainframe environment ending up deployed on x86 stacks, either locally or in the public cloud. 

But is this anything to get worked up about? Does it really matter that much? 

Well, every workload placed on an x86 stack when it would be better served by the mainframe represents a direct incremental expense, an opportunity cost, and/or a compromise on service levels or risk for the business. This of course assumes you already have a mainframe.

Put simply, the agreed premise was that the IBM Z is not getting its fair share of the action among mainframe customers when it comes to workload placement, and the sooner this is addressed, the better.

Isn’t it just that the mainframe is old, out-of-date and expensive?

Harbeck and Lock were having none of this nonsense. The two of them recapped – a bit too extensively if you ask me – all the arguments and proof points around performance, scalability, resilience, security, cost per unit of work, and oh so many other things. 

For the sake of this article, let’s just say that between them they convincingly made the case for the mainframe being a good place to run stuff, particularly things that are more critical or demanding in nature.

So what is the explanation for it not being as popular for new workloads as it should be? 

Myths, misconceptions and comfort zones

A reality often overlooked in mainframe circles is that most of the IT leaders, architects, programme managers and others involved in choosing which platform an application should be developed for or hosted on do not have a mainframe background. They’ve probably progressed their careers working in and around Microsoft, VMware and other x86 stacks, and more recently cloud services such as AWS, Azure and GCP. To them, the mainframe is an unknown realm, and as I said in a recent article, it might as well be labelled “Here be dragons”.

Harbeck picked up on this and took it further: “People out in the ‘distributed’ world often don’t know what they don’t know when it comes to IBM Z. They sometimes accept myths and misinformation at face value, or just assume the platform must be out-of-date because it’s been around for so long.”

The question, then, was: do they just need educating?

Lock chimed in: “The trouble is that most of the people we are talking about would not consider themselves to have knowledge gaps, and the rest just wouldn’t be interested, as they’re actually quite content with the status quo.” Harbeck agreed, adding: “It’s hard to change people’s minds. From their point of view, it’s much easier to go with the platforms they know.”

The softly, softly approach

At this point I threw in an idea discussed during a recent conversation I had with IBM’s Meredith Stowell, VP of Worldwide Ecosystem at IBM Z & LinuxONE. This was to get mainframers and non-mainframers to interact and learn from each other through mentoring, reverse-mentoring or the formation of cross-disciplinary teams and working groups. Create environments in which people from different backgrounds naturally rub shoulders on an ongoing basis and, over time, minds become more open. Do this for long enough and the mainframe will quite naturally assume its rightful place.

It’s hard to argue with this kind of approach, and it’s something I’d definitely advocate as part of your long game. But what if you wanted something harder-edged that might make a difference sooner? As Lock said: “You need initiatives to bring in new blood given how many experienced mainframe specialists are nearing retirement, but you also have to take action that will make an impact in the shorter term.”

The more direct route

As we talked it through further, Harbeck filled us in on an idea that he had been considering. This was to take a page from the procurement playbook and adopt a more structured, RFP-style approach to making workload placement decisions. “Define and weight criteria, distinguish between mandatory, highly desirable and valuable requirements, then select the right platform based on an honest and objective assessment of suitability taking all of the relevant parameters into account”, Harbeck suggested.

In other words, rather than defaulting to the platforms they know best, decision-makers and influencers would be encouraged (even mandated) to define requirements clearly, then make platform decisions objectively. In effect, this would put the mainframe on an equal footing with other platforms, so we’re no longer talking about its strengths generically, but highlighting its advantages in each specific context. No one is suggesting that the IBM Z would ‘win out’ every time, but the approach would stop it being overlooked or ignored to the degree it often is today.
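To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of what such a weighted fit assessment might look like. Every criterion, weight, threshold and score below is a hypothetical placeholder rather than a recommendation; a real assessment would be built from your own organisation’s requirements and evidence.

```python
# Minimal sketch of an RFP-style weighted fit assessment.
# All criteria, weights, thresholds and scores are hypothetical
# placeholders, included for illustration only.

MANDATORY = "mandatory"
DESIRABLE = "highly desirable"
VALUABLE = "valuable"

# (criterion, classification, weight)
criteria = [
    ("resilience",            MANDATORY, 5),
    ("data proximity",        DESIRABLE, 4),
    ("cost per unit of work", DESIRABLE, 3),
    ("team familiarity",      VALUABLE,  2),
]

# Fit scores per platform, from 0 (no fit) to 5 (excellent fit); illustrative.
scores = {
    "IBM Z": {"resilience": 5, "data proximity": 5,
              "cost per unit of work": 4, "team familiarity": 2},
    "x86 on-prem": {"resilience": 3, "data proximity": 3,
                    "cost per unit of work": 3, "team familiarity": 4},
    "public cloud": {"resilience": 4, "data proximity": 2,
                     "cost per unit of work": 2, "team familiarity": 5},
}

MANDATORY_THRESHOLD = 3  # below this on any mandatory criterion = disqualified

def assess(platform):
    """Return the weighted total score, or None if a mandatory criterion fails."""
    total = 0
    for name, classification, weight in criteria:
        score = scores[platform][name]
        if classification == MANDATORY and score < MANDATORY_THRESHOLD:
            return None  # fails a must-have requirement, so it's out
        total += weight * score
    return total

for platform in scores:
    result = assess(platform)
    print(f"{platform}: {'disqualified' if result is None else result}")
```

Run as-is, this ranks the three hypothetical platforms by weighted score and disqualifies any that fail a mandatory criterion. The value lies less in the arithmetic than in forcing the criteria, weights and scores to be written down, debated and defended.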

As it happens, this pseudo-RFP approach is one we’ve advocated at Freeform Dynamics for a while. We’ve also seen an increasing number of IT teams apply this kind of ‘fit-assessment’ discipline as they’ve gained experience with public clouds. Indeed, the recent emphasis on FinOps is in no small part driven by workload/platform mismatches contributing to runaway costs. And we’re coming across more companies moving workloads back from the public cloud to the corporate datacentre – a.k.a. repatriation – for a whole raft of reasons that, taken together, point to the same underlying workload/platform mismatch.

Coming back to the IBM Z, a practical point to remember is that it’s clearly not just the inherent platform attributes that matter, but also things like workload proximity to data and related applications. This is a big consideration given that a lot of the data required for advanced analytics and AI solutions, for example, currently resides in mainframe storage.

It’s ultimately about governance and discipline

Zooming out, whether you call it hybrid cloud or hybrid IT (we prefer the latter), the idea of mixing and matching different platforms shouldn’t be a licence for every application team or developer to select a target environment purely on the basis of preference, familiarity or habit. Allowing platform options to be closed off due to prejudice, myths or ignorance is equally counterproductive. As is so often the case, good governance is the key.
