Open Source, Artificial Intelligence, and LPI
I'm going to lead with the punchline on this one. I believe that LPI should invest in providing a certification path for machine learning, specifically geared to open source development in artificial intelligence.
Whatever you may think about automation and artificial intelligence from the perspective of what it will eventually mean for humanity, there's no question that some form of artificial intelligence is present in every aspect of our lives. Those of us who own one or more Google Home or Alexa speakers know full well how much AI touches our lives. For us, it's an ever-present companion.
Smart systems like Google's Assistant are built using TensorFlow ( https://tensorflow.org ), an open source programming library that has become the go-to toolkit for anyone building machine learning, deep learning, natural language processing (as in your smart speaker), or neural-network-based applications. TensorFlow-based applications are programmed using Python, another free and open source language.
Speaking of Python, there's also PyTorch ( https://pytorch.org ), a set of deep learning Python libraries built on Torch, yet another machine learning toolkit, this one developed by Facebook. Its primary purposes are computer vision, facial recognition, and natural language processing.
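To give a flavour of what these libraries actually compute, here's a toy single artificial neuron in plain Python. This is my own minimal sketch, not TensorFlow or PyTorch code; those libraries compose millions of units like this one into deep networks and add GPU acceleration and automatic differentiation on top.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs, plus a bias,
    squashed through a sigmoid activation into the range (0, 1). This is
    the basic building block that deep learning libraries stack into
    networks."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# With all-zero weights and bias, the neuron is maximally uncertain:
print(neuron([0.5, -1.2, 3.0], [0.0, 0.0, 0.0], 0.0))  # 0.5
```

Training, in these libraries, amounts to nudging the weights and biases of many such units until the network's outputs match the examples it's shown.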
Keep in mind that there are already plenty of AI and ML tools out there, built with and distributed as open source. We also have organizations that are dedicated to AI and ML being entirely open. For instance . . .
H2O.ai at https://www.h2o.ai/
AI.Google at https://ai.google
OpenAI at https://openai.com
While I understand that the focus for LPI has been to champion Open Source and to help build the futures and careers of Linux systems administrators, including DevOps, machine learning and artificial intelligence tools are making their way into every aspect of these professions. In fact, the smart sysadmin has always sought to use the tools at their disposal to automate as much of the administrative workload as the available technology allows.
As systems get more complex and distributed across both the physical and virtual worlds, a simple hands-on approach is no longer practical. Automation is key to keeping things running smoothly. Even so, simply relying on these automated systems to spit out interpreted logs doesn't help if there isn't someone there to respond should something catastrophic happen. That's why we've been automating a variety of responses based on selected events. We can tell our systems, "Only call me if it's really important. Only tell me about those things that actually require my intervention."
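That "only call me if it's really important" policy can be sketched in a few lines. This is a hypothetical example of my own (the severity names and actions are assumptions, not any particular monitoring tool's API), just to show the shape of event-driven triage:

```python
# Map event severities to a rank, and page a human only past a threshold.
SEVERITY = {"debug": 0, "info": 1, "warning": 2, "error": 3, "critical": 4}
PAGE_THRESHOLD = SEVERITY["critical"]

def triage(event):
    """Return the automated response for a logged event (hypothetical)."""
    level = SEVERITY.get(event["severity"], 0)
    if level >= PAGE_THRESHOLD:
        return "page-human"    # catastrophic: actually wake someone up
    if level >= SEVERITY["error"]:
        return "auto-restart"  # scripted remediation, no human needed
    return "log-only"          # routine noise: just record it

print(triage({"severity": "critical"}))  # page-human
print(triage({"severity": "info"}))      # log-only
```

Real monitoring stacks dress this up with escalation schedules and deduplication, but the core idea is the same: the human only sees what crosses the threshold.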
Trouble is, those complex distributed systems I was talking about are getting more complex, and more distributed. At some point, human intervention, by even the best and most highly trained human, becomes a bottleneck.
Have you heard of DeepMind? This machine learning startup was bought by Google (technically Alphabet, but I still think of the whole thing as Google) in 2014. In 2015, its AlphaGo program beat Fan Hui, the European Go champion, five games to zero, demonstrating that a machine learning system could learn to win a game so complex, with so many combinations and permutations, that victory had been deemed nigh impossible for a computer.
AlphaGo continued to flex its machine learning muscles until, in 2017, it beat Ke Jie, the reigning world champion of Go.
Later that same year, a next-generation system, AlphaGo Zero, taught itself to play Go in less than three days, then went on to beat AlphaGo 100 games to zero.
Fast forward to 2018. Alphabet (which I'll probably just keep thinking of as Google) turned DeepMind loose on its monolithic data centres, giving the algorithm direct control over their cooling and reportedly cutting the energy spent on ventilation, air conditioning, and other cooling measures by some 40%. No humans involved or required. This is data centre infrastructure management, fully automated.
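For a sense of what "giving the algorithm control over cooling" means at its simplest, here's a toy proportional controller in plain Python. This is entirely my own illustrative sketch (the target temperature and gain are made-up numbers); DeepMind's real system replaces a hand-tuned rule like this with a learned model, but the feedback loop has the same shape: read a temperature, decide a cooling effort.

```python
def cooling_setting(temp_c, target_c=24.0, gain=0.1):
    """Toy proportional controller: cooling effort rises linearly with
    how far the temperature sits above target, clamped to [0, 1]
    (0 = cooling off, 1 = full blast). Purely illustrative."""
    effort = gain * (temp_c - target_c)
    return max(0.0, min(1.0, effort))

print(cooling_setting(24.0))  # 0.0 -- at target, no cooling needed
print(cooling_setting(30.0))  # roughly 0.6
print(cooling_setting(40.0))  # 1.0 -- far over target, clamped to full
```

The appeal of the machine-learned version is that it accounts for dozens of interacting variables (weather, load, pump speeds) that no hand-written rule like this one can capture.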
It is, in fact, the logical end goal of every sysadmin.
So, am I suggesting that LPI should get behind and provide certification for a technology that will, if all goes well, do away with the need for systems and network administrators? In a word, yes. The next logical question is why?
Since full automation is the logical end game for what we've come to think of as systems administration, and since pretty much all of this smart technology runs on Linux servers and is built on open source software and tools, we must embrace the technology and direct it, making sure that intelligent machines have our collective human best interests at heart. I don't know how long it will be before the last sysadmin retires, but that day is coming whether we are a part of it or not. It behooves us to ensure that when fully autonomous systems take over, they operate on safe and ethical principles.
Furthermore, as the need for classic administration fades into history, it is those people with the skills to tackle these marvellous new technologies who will benefit from a slightly longer career. For as long as that might last, this will be valuable knowledge indeed.
Needless to say, there will be conflicting opinions on this subject, and this is where I turn it over to you. Am I right? Should LPI follow a path to Artificial Intelligence and Machine Learning certification? The first exam could be AIML-1, in the spirit of past course naming conventions. Perhaps I've read the tea leaves wrong and the age of human admins is far from over. Either way, I open the floor to you and look forward to your comments.