By the early eighties, data processing had won most of the battles on the strength of a single argument: that efficient use of corporate resources required a single corporate systems architecture, one that only the data processing group was in a position to design and implement. Suddenly people were doing production planning in batch mode on MVS, IBM was selling a lot more gear, and machines from companies like Harris, Gould, Wang, and Data General were being moved out.
One reason this argument worked so well is simply that it’s valid: a consistent, centrally administered IT infrastructure should be more efficient than a bunch of IT islands working either separately or against each other.
One area where an echo of this affects those of us working with Unix is the question of appropriate staffing for Linux deployments, because the general corporate strategy of standardising the platform and hiring accordingly repeats that argument from the 70s. Organizations standardising on Red Hat Enterprise Server generally try, for example, to hire people with Red Hat Enterprise Server experience, and then press that combination into service as the one-size-fits-all solution for whatever Linux needs line managers may have.