The History of the Future of Cybersecurity & Information Warfare Education

Original Publish Date: July 2022

The world is currently lacking some 2-3 million cybersecurity experts. That number is expected to double shortly, as more cyber-kinetic and anthro-cyber-kinetic (people, cyber and physical) systems are productized for industry and mass consumption. 

At the same time, cybersecurity practitioners have become laser-focused on insanely complex, highly verticalized subsets of our industry. Rough translation: we tend to train and hire skill-set specialists rather than adding security generalists to the mix. Security professionals often have trouble talking to each other because so many specializations use different nomenclatures, terms, and acronyms; dozens of technically disparate lingua francas. We have forgotten about teaching strategic generalities and interdisciplinarianism. Overspecialization and isolation from other technical fields are two clear recipes for failure.

To compensate for four-plus decades of an abdication of generalist approaches in cybersecurity education, I propose adding a triad of requisite base knowledge to the field: Engineering, History, and Humanity. This proposed triad methodology is aimed at all levels of EDU: professional development, operational and officer military specialties, secondary and advanced computer science curricula, certification authorities, educational institutions, trade groups, and organizations that want to improve their posture by encouraging interdisciplinary synergy amongst security professionals and practitioners. A broader common knowledge base for deeper conversation and cooperation with other related disciplines gives us greater chances of progress and success in our mutual endeavors.

The goal herein is multi-fold: a broader interdisciplinary generalist approach will yield more opportunity for discovering new approaches, tools, techniques, and hopefully, abstractions for anthro-cyber-kinetic systems, which are inherently interdisciplinary. We cannot, with absolute surety, predict the future. But, when security practitioners have a deeper and wider toolkit at their disposal, they gain advantage when dealing with the unknown effects of future-tech development, integration, and deployment. Finally, I believe in Aha! moments, when brilliance, inspiration, and perhaps even genius, unexpectedly manifest themselves in a game-changing zeitgeist of a particular field of thought. 

This paper examines a variety of skills and knowledge that, IMHO, will greatly assist the professional and benefit the cybersecurity field as well, by promoting higher levels of abstraction versus exclusively rewarding highly specialized knowledge and practice, which has failed.

Genesis

It all began in a bar. Well, at a colleague’s exclusive Scotch & Cigar party at RSA 2018. (For the record, I hate both Scotch and cigars, so I was reduced to bringing my own red wine, suffering the dagger-like stares from my colleagues who saw me, clearly, as a heretic.)

That evening, in a dark corner, the celebrated Dr. Eugene Spafford (Spaf) and I were chatting about aspects of my analogue security work. I bemoaned a lack of interdisciplinary knowledge in our industry because I found it difficult to explain many analogue engineering concepts, as they were based on a variety of technological contexts. They were entirely new to so many people: so old, yet so new. 

In response, he suggested that I submit a paper to a (then) upcoming educational conference and put forth my ideas. I am not an academic, and my submission failed in dozens of ways to meet the necessary criteria of this formal event; rejection had been anticipated, but Spaf got me to organize my thoughts.

Based upon dozens of talks I had given on Analogue Network Security, I found it necessary, of course, to write a book by that title, to fill many knowledge holes that I had considered to be fundamental. My assumptions about fundamental interdisciplinary knowledge were completely off base. I was wrong on that point; oh, so wrong. Never assume; though, in my defense, assuming my peers already had that knowledge was intended as a show of respect.

I asked the experts; friends, colleagues, and audiences around the world. For example, less than 1% of fairly technical audiences knew who Claude Shannon was, much less his foundational work on Information Theory. Similarly, cluefulness about the basics of systems feedback and feedforward signaling in ICS was virtually nonexistent. Bayesian probability, systemic stress analysis, the basics of ML/DL (knowing there is no such thing as AI), out-of-band controls, and so much other foundational knowledge was significantly lower than I had expected. (But, then, I can’t code to save my a**, well-proven over many years of failure.)
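Shannon's foundational result is approachable in a few lines. As a minimal, illustrative sketch (the function name and example distributions are mine, invented for illustration), entropy measures the information carried per symbol of a source, in bits:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of information per flip.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A heavily biased source carries far less information per symbol,
# which is why predictability is the enemy of keys and passwords.
print(shannon_entropy([0.9, 0.1]))
```

The same handful of lines underpins everything from compression limits to why low-entropy passwords fall quickly.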

The cybersecurity field no longer lives in the isolated world of source code, TCP/IP, AES, IPS, SIEM, SOAR, or any other of the myriad acronyms that populate our discussions. The concept of cyber-only security should have, IMHO, already been given an RIP plaque; an overdue death for sure, by any reasonable measure. Nothing lives in isolation, yet we still tend to verticalize. 

What has changed? 

The convergence of several disciplines, notably, ML/DL (Machine Learning/Deep Learning, big data, etc.), quantum mechanics, robotics, nanotech, materials sciences, and neuroscience (et al) has fundamentally changed our CPU-centric industry into one where Anthro-Cyber-Kinetic systems are the rule: and it’s already happening faster than the vast majority of people realize.

Folks who have narrow IT or cybersecurity skill sets, just as was true a century ago with mechanical skills in the industrial age, will be relegated to the lower rungs of future success, lacking the interdisciplinary awareness to provide vision and leadership in the coming years. The era of hyper-specialization in cyber-security must fade away quickly if we are to make serious progress in all three domains of security: cyber, physical, and human. 

With encouragement from my professional colleagues and friends, I decided to attempt to outline what I considered to be a worthwhile suite of generalist awareness skills that are desperately needed, as evidenced by the dearth of interdisciplinary cluefulness I found almost everywhere. Our educational system has shifted from generalism to specialization, without even being noticed; and because of that myopic trend, we suffer greatly.

I remember Science 101 in 2nd grade. I remember a lot of 101 classes that were meant to give students "A Clue" about varied topics, but over the years, the pushback for hyper-focused professional training and development won. Some of you might appreciate the parallel to the Copenhagen Interpretation of Quantum Mechanics in the 1920s. I want to reintroduce a strong interdisciplinary core of knowledge for the Anthro-Cyber-Kinetic Security industry.

And what did Dr. Spafford do when I ranted about this at that wonderful open-air venue in San Francisco? He told me, “Write it down.” 

So, I did. Thanks, Spaf!

As befitting my first career, let’s talk Sex, Drugs, and Rock’n’Roll.

The Cybersecurity Triads

Yes, it’s always about Sex, Drugs, and Rock’n’Roll. 

 “A priest, a rabbi, and an atheist walk into a bar…”

 “Reading, writing, and arithmetic…”

All three of these phrases apply the Rule of Three to make a point that easily resonates with people. 

The aptly named ‘Rule of Three’ is part of our inner- and outer- language, writing, public speaking, scripts, and jokes; it’s part of our music, 3-act plays, films, and art; The Rule of Three is a big part of how we think, make sense of, and cluster information. In fact, human DNA is based upon threes – triplets. And that’s the truth, the whole truth, and nothing but the truth.

Writers, for example, often use adjectives in groups of three in order to emphasize an idea. In my 1992 book, “Information Warfare,” I used the Rule of Three to taxonomize the three classes of information warfare: Class 1: Personal; Class 2: Corporate; Class 3: Nation-state and NGO. Simple.

In the physical form, threes are inherently synergistic, and by adding dimensions, greater truths can be found. Which is stronger, a 2D triangle (3D tetrahedron) or a 2D square (3D cube), when you apply pressure to a vertex? Buckminster Fuller’s synergistic revolution is based upon the ‘tensegrity’ of threes in nature. Consider that the topology of a 4-dimensional hypertetrahedron is an analogue of a 5-node full-mesh network (useful for examining DDoS/spam and similar upper-tier hostilities). I find that first thinking about ‘stuff’ in threes provides a solid foundation with plenty of room to add more complexity, as needed.
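The hypertetrahedron analogy checks out with simple counting: an n-node full mesh has n(n-1)/2 links, and for n = 5 that is 10, exactly the edge count of the 4-simplex (hypertetrahedron). A toy sketch (the function name is mine):

```python
def full_mesh_links(n):
    """Number of links in an n-node full-mesh network: n choose 2."""
    return n * (n - 1) // 2

# A 5-node full mesh has 10 links, the same count as the 10 edges
# of the 4-simplex, which is why the two structures map onto each other.
print(full_mesh_links(5))  # 10
```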

I have been using security triads (Rule of Three in action) for lots of decades and they have proven to be terrifically effective at visually conveying strategic concepts. 

The first security triad is the age-old C-I-A. Using the Rule of Three, triads are a good tool for security awareness and a means to organize (taxonomize) security-relevant thinking. The Domains Triad (Physical, Cyber, and People) has evolved into Anthro-Cyber-Kinetic modeling, the basis of the vast majority of technologies with which humans interface. One of the beauties of triads is that all elements are able to ‘touch’ each other. We can then add weighting (the relative importance of each component within a group of components) to visualize the interdependence of each one.

The Many Lives Triad, (Personal, Professional, and Mobile) clearly shows the overlaps in the 21st century, both technologically and socially, often creating a conflicting mish-mash of priorities we see everywhere. Within an organization, our triad of total employee population (Users, Execs and Techs) allows overlap and integration versus verticalization, and then we can add weighting. I’m viewing this from 40 years of trying to improve security behavior between humans and technology in all sorts of organizations, each with unique weighting factors.

Consider examining a security culture within an organization by using the Processes, People, and Technology Triad as a prism. New answers will appear. Then, focus on how much weight is put on the Processes (vs People and Technology) that enable any operationalization of any task. You can view this with an ROI view, a security incident response view, or any other organizational function, and then overlay the fourth dimension, time. 

The operational role (or any process component) can also be assigned to an actor, or Subject, in a system. (Feedback is another issue.) As automation dominates, safeguarding our environment requires that we understand the decision-making structure and the ultimate arbiter and its bias.

Is a particular process controlled by a Human (manual), or is it purely automatic (without any human intervention), or is there a percentage of process hybridization, to realistically balance resources, time, and risk allotment? I created this one as part of a Detection-Response matrix (as in Analogue Network Security), to achieve minimal Event Exposure time.
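The time-based idea behind minimizing Event Exposure can be sketched as a toy check. This is an illustrative reading of the detection/reaction relationship (the function names and numbers are mine, not from the book): exposure is detection time plus reaction time, and protection must outlast it:

```python
def exposure_time(detection_s, reaction_s):
    """Exposure E = D + R: how long an attack goes unanswered, in seconds."""
    return detection_s + reaction_s

def protection_holds(protection_s, detection_s, reaction_s):
    """Time-based test: defenses hold only if protection outlasts exposure (P > D + R)."""
    return protection_s > exposure_time(detection_s, reaction_s)

# If defenses delay an attacker 600 s but detection plus reaction take
# 60 + 840 = 900 s, the system is exposed for 300 s and the check fails.
print(protection_holds(600, 60, 840))  # False
```

The practical lesson is the same one the matrix conveys: shaving seconds off detection and reaction buys more real security than piling on static defenses.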

In true anthro-cyber-kinetic fashion, I used a triad to demonstrate the relationship between digital and analogue processes with respect to time.

For those familiar with SCADA/ICS, the relationships between continuously variable and stepped measurement over time are fairly obvious, but certainly not prevalent in traditional cybersecurity models.

Finally, for fun, and probably of use only to US-television-comedy fans, is the SNL Triad for strong, easy-to-remember password creation. This is still applicable under current NIST guidance.

When writing my history of the future of cybersecurity education notes, another triad almost immediately fell into place. I have come to the firm conclusion that we must integrate our current cybersecurity training with additional overarching skills from three disciplines: Engineering, History, and Humanity. Certainly, something of a return to generalism in this age of specialization (or over-specialization).

I am often asked about the biggest cyber threats we face today, tomorrow, and further into the future. I chose to reach back in time, back to my early cyberwar days (late 80s and early 90s), and retrieve what I told governments, militaries, and any global enterprise that would listen: “The greatest threat we face today and in the coming decades is apathy, arrogance, and ignorance.” And that, indeed, is a people problem that the History of the Future of Cybersecurity Education will address.

Engineering

“Things” are designed and built by engineers. Physical “things” are constrained by the laws of physics, which must be dealt with in design and construction. (Don’t go all quantum on me, now. We can always meet up at a quantum conference.) “Things” must be tested, stressed, measured, and then tested again to ensure reliability (measured), safety (approximated), and failure modes (exercised), among other chosen criteria. An OODA Loop, if you will.

In our world, such testing methodologies are simply not common. We can do better.

 

Left: Modelling cyber-conflict optimization using OODA loops. Right: Applying the same model to AI with bias neutralizing function.

The principles of acoustic, mechanical, and electrical ‘networks’ are analogous to a generalized approach to stability of a given ‘circuit’. (Electrical circuits, piping systems, and electro-acoustics are all ‘networks’, and can be analyzed at varying levels of granularity.) 

The Netherlands’ massive country-wide Delta Works is an ideal model of integration of vastly different engineering skills into one project. The complexity of systems and systems’ integration (“My code works, it must be the other guy who broke it…”) is often a failure in process due to a lack of awareness or application of non-computer-science-specific engineering principles.

Consider the cyber-kinetic amalgamation: from robotics to drones to future intelligent autonomy, the cybersecurity professional should have some fluency in, and a strong appreciation for, the other engineering disciplines used in any system, especially those that are already here, now!

The first leg of the triad to support the History of the Future of Cybersecurity Education is a suite of comprehensive training to give the student a solid grasp of the basics of non-software engineering, networks (abstract), process control, Newtonian physics, electronics, feedback, analysis, systems architecture, interdependent components in systems… and the list goes on to include Boole (logic), Shannon (information theory), von Neumann (digital computing), Turing (processing), Wiener (cybernetics), and more, as we will see in the next section of my idealized EDU triad. I don’t propose creating experts in all of these varied topics. I do, however, hope that our cybersecurity experts will develop a reasonable technical conversational fluency in myriad disciplines: synergy is an unpredictable miracle. It’s free; let’s take advantage of it.

We tend to avoid teaching complex systems; much more comfortable, we are, in our little silo, ignoring the non-intuitive fact that the entire internet, the global telecommunications system, and every enterprise and home-based network, in addition to billions of endpoints and trillions of sensors, is a closed system. How about self-referential scaling networks, operating at dimensions between whole integers? Think fractals and how networks map against nature! Think big. Think small. Think synergy.

We all speak nano, but who understands nano, memristors, MEMS, and the associated good, bad, and ugly of working in billionths of a meter? What about Bayes and Boole, the geniuses behind probability and logic, the two fundamental theories differentiating probabilistic from absolutist behavior? Cybersecurity without those basics is like English with only 8 letters instead of 26; anemic at best.
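Bayes’ contribution can be made concrete with the classic base-rate effect in alert triage; a minimal sketch with hypothetical numbers (the detection and false-positive rates here are invented for illustration, not measured data):

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """Bayes' theorem: P(attack | alert) = P(alert|attack)P(attack) / P(alert)."""
    p_alert = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return (true_positive_rate * prior) / p_alert

# Hypothetical IDS: 99% detection rate, 1% false-positive rate,
# but only 1 in 10,000 events is actually hostile.
p = posterior(0.0001, 0.99, 0.01)
print(f"{p:.4f}")  # about 0.0098: fewer than 1% of alerts are real attacks
```

This is exactly the probabilistic-versus-absolutist distinction in action: a “99% accurate” detector, read deterministically, sounds airtight; read through Bayes, almost every alert it raises is noise.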

With a longer view, let’s add a basic understanding of data science, computational biology (systems), and a dash of quantum (yeah, mind-spinning WTF tech that actually works, but we can’t explain). Your brain, now open, will find new answers and approaches, automatically.

Like it or not, AI is not here. Not real. Yep, it’s fake. Sorry. Are you fluent in the distinctions and theories of AI vs ML vs DL and the hundreds of permutations of this burgeoning discipline? Are you listening to vendors’ descriptions, or should you study on Coursera and invest the time to become at least conversationally fluent, so you can ask the right questions and have a decent AI-BS flag detector? Functional versus ethical bias in anthro-cyber-kinetic systems is a potential show-stopper for much of the vision of ML/DL (“AI”), which returns us immediately to the errant conflation of probabilistic (analogue) versus deterministic (binary) decision making, and Trust vs. Risk. Shouldn’t our designers and inventors have some foundational knowledge before constructing a solution that might be doomed from the start?

Now, IoT and IIoT are getting competition from IoB – the Internet of Bodies, where the body is the source of sensors, the target of responses, and the actual communications-infrastructure backbone itself. Want to secure that?

Ignorance is never a good excuse, and our industry’s baselines of knowledge must be raised to include at least what is already on the exhibit hall floors!

Finally, we teach and believe in digital binaryism when, as I argued mathematically in my recent book, everything is analogue. The very few binary things you can think of fail more than they succeed, for so many reasons. It’s like having the cure for cancer but keeping it secret, because showing it to the world implies past failures.

Which means we should be embracing failure in our schools and in the workplace. Do you want to hire the person who claims to have never failed, or the person who has suffered and survived and overcome dozens of calamities in their career? No-brainer, to me. 

But think about a lot more than just managing a firewall, a job that will become the equivalent of hamburger flipping in ten years. Let’s get some engineering basics under our belts. Study on and study well.

History

“Those who cannot remember the past are condemned to repeat it.”

George Santayana (16 December 1863 in Madrid, Spain – 26 September 1952 in Rome, Italy) was a philosopher, essayist, poet, and novelist.

Many security professionals bemoan the latest and greatest next-generation hardware or software “Solution” with varying degrees of indignation or indifference because, after some fifty years of practicing cybersecurity, it’s just getting worse. The bad guys are winning. No, we are letting the bad guys win. (Think apathy, arrogance, and ignorance, which dominate our industry failures.) We see the same old problems and weaknesses wrapped up in metamorphosed code, protocols, or attack vectors, which yield ineffective rehashes of solutions known not to work – yet eminently ‘sellable’ as the NextGen answer. Bah, Humbug!

Rediscovering the same thing and redressing it under a new cloak with every VC-cycle is not working. What would happen if we seriously taught cybersecurity students about computing and information history, through the lens of an historian first, and a geek second?

Was the Antikythera Mechanism a computer? Why is the abacus potentially superior to a calculator? Moveable type? What technology was used for the 1890 US census? Hypothetically, if Babbage had had better metal-workers, how would the world be different? Historical context needs to be part of the critical cybersecurity professional’s arsenal. A little Boole; some Bayes is a must. Yes, Leibniz, but not too much. How is the counting method used in China superior to the Western approach?

Then along comes rudimentary information theory, first thought of as entropy, then formalized by Shannon in 1948. The incredible history of technological progress, from the 1890s on, provides us a roadmap for what worked, what didn’t work, and what should have worked – but didn’t, and why. (Think positive feedback, as in the Beta/VHS war. Or Russian-style Information Warfare. Or Tesla’s 1898 earthquake-machine demonstration.)

The design goals of TCP/IP over the years, what were they? The many security models that worked, but never got popular: why? The standards proposed over the years; the trials, the tribulations, and why we are where we are now. Avoiding the same mistakes over and over again is essential to the integrity of the next generations of cybersecurity pros.

I ask audiences about the 1972 Anderson Reference Model, which we still use today, and I get blank stares. People have heard something about the Orange Book and the Rainbow Series but have never read them or understood their contexts or potential applicabilities, if any, today. And that’s about it.

How does cybersecurity map across relevant military history, e.g., deception, or time analysis in conflict domains? Why were women the dominant gender in computing until 1984? Some math history would not hurt, to map interdisciplinary progress made in so many fields and enhance their complementarity, vis-à-vis synergy. Yes, study Buckminster Fuller, too.

Point made. Our industry has a rich and storied past that clearly should make history the second leg of the EDU triad and part of the vocabulary of every serious cybersecurity professional. We should all know what has been done and learned over the last centuries (science), since the ~1830s in computing and cryptography, and, more specifically, in infosec since the mid-1960s (no disrespect to Hedy Lamarr; what a pioneer!). There’s a lot to learn, synthesize, and integrate into better solutions than we have to date.

Maybe then, we can create some truly new and novel solutions, instead of repackaging the old ones under the banner of Next-Gen BS.

Which brings us to the third foundational leg of my proposed supplementary cybersecurity-education triad, People; carbon-units.

Human

The brain is a computer, sort of.

It’s the greatest averaging machine (yes, it’s analogue) we believe the universe has ever created. It is also mistake-ridden. Its memory circuits are so-so for most people, and alterable by both benign and hostile external influences, not to mention internal biases and inconsistencies, exacerbated by the passage of time. Our bio-sensor networks are inherently flawed; latencies of 100 ms or more abound, requiring us to redefine human perception (Detection) and Reaction efficacy as we interface with technology.

Perception is reality; or is it? How do our nervous system’s sensory organs allow external forces to influence our opinions, behaviors, detection, and reaction mechanisms? Our nervous systems are hyper-plastic, adaptable for survival, and our reaction systems are reflected in jet engines and the Analogue Network Security models with detection in depth, which supersedes the power of an over-hyped, underperforming defense in depth.

Unless we understand how the human sensory processing systems work, we will continue to design systems for the computer (et al) instead of the user, and the results are self-evident. We are all prone to high error rates, misinterpretation (think magic and illusions which baffle us), and time-based degradation. 

Do enough cybersecurity professionals have the basic knowledge to discuss how digital systems might map to the brain, which runs on a mere 20 watts, about six orders of magnitude less than our best digital equivalent? Or how faulty attempts at “AI” can send security efforts into a mindless and harmful tailspin? Show some respect for nature, and learn!

Our brains are the weakest security components in all anthro-cyber-kinetic systems as the number of daily interactions and reliances between man and machine skyrocket. We are the biggest fault with HIDs (human interface devices). Neuroscience research over the last decade has upended our prior beliefs with new paradigms, which offer new approaches to improve cybersecurity, awareness, behavior modification, tech-user experiences, and feedback loops in the human domain. Without humans we have no IT industry, yet cybersecurity professionals are painfully weak in all the aforementioned areas, even at the conversationally aware level.

Significant percentages of cyber-attacks begin with some form of social engineering, also known as Hacking the Brain. Instead of misanthropically blaming the ‘idiot end-user’ as an easy scapegoat, research is showing a greater appreciation for those subtleties which influence human behavior. Our industry should be thinking that way, too.

Portions of the IT industry, Social Media platforms, certainly Facebook, Twitter, Instagram and the like, are the biggest ‘drug’ ‘distributors’ on the planet. (You can compare to the nicotine delivery industry.) The micro-reward process on social media triggers the release of a wee-bit of dopamine for every ‘like’ and positive feedback reinforcement. Ergo, the tremendous rise in mobile device addiction: looking for that next ‘fix’. Candy Crush and Nintendo: designed to be addictive. As should be the human element of good security hygiene. But, again, we practice so little, because the cybersecurity pros don’t have enough interdisciplinarianism or generalist knowledge.

Some phishing, vishing, and smishing (~70-90% of initial data-breach vectors attack the human) are designed to identify your particular dopamine trigger, from sexy pix to greed to self-help… and with the added capability of micro-targeting audiences, this is the weaponization of the interface between man and machine. Other social attacks target fear, impulsiveness, greed, and a host of other human instincts and frailties. We need better approaches in our defensive toolbox to defend against these and future similar attacks.

It’s about people. It’s about adapting the interface between silicon- and carbon-based systems. It’s about making them work together in a cohesive, iterative, OODA-like security system, which will yield results never seen before; all in an effort to give cybersecurity professionals new awareness, conversational fluency, and a serious appreciation for interdisciplinary implementation. The results will be astounding, I guarantee, because we have never done it.

We are stuck three generations back in our thinking, and we suffer the results of our own apathy, arrogance, and ignorance.

What about ethics? Ever attended one of my Trolleyological Conundra sessions? They expose how our internal human biases end up being programmed into life-and-death systems. Those alleged AI systems are, generally, deep neural networks, and no one knows how they work. The ML/DL system can’t explain how it arrived at its answer. And it has no memory. Yet, headlong we go into that unknown realm, as we attempt to electronically duplicate a human brain… mostly unsuccessfully, despite vendor hype to the contrary.

Our industry is horrendously weak and short-sighted about what kind of people we will allow into our hallowed halls and ivory towers of omniscient security bravado. What could the Rain Man possibly bring to the party? Social awkwardness is tough when meeting the HR person for the first time. We are ‘all on the spectrum’, and we all have unique personality traits, but to dismiss out of hand those who might make us uncomfortable or who do not neatly fit into our particular operational paradigm is a failure of our understanding of what it means to be human. Please take a look at the RSA talk, Hiring the Unhireable, for some clues as to what we can do. (https://www.youtube.com/watch?v=C_9cdzLi6-o) I am pleased that, finally, nascent efforts are showing considerable success. Hire more “unhireables,” please.

Cybersecurity professionals must be inside the head of their audiences, their users, and any carbon-unit that interfaces with technology. Human behavior is at the core of so much insecurity (in both senses), and since the brain is our detection-reaction system and is integral to all cyber-ish (and anthro-cyber-kinetic) activities, understanding the importance and implications to our security equation is as essential as engineering and history.

Movin’ On…

After thinking about this for years, I am firmly convinced that these missing pieces of cybersecurity education together can be the beginning of a discussion to restart our industry and evolve it away from the antique binary foundations we still rely upon.

Engineering. History. Humans. It’s a triad for the future of cyber-security education, adapted to the anthro-cyber-kinetic world we live in. Is this idea complete? No. It’s a thought piece to get the ball rolling in a new direction; an imperative to try something different. Not a true moon-shot, but in our field, we can and should call these shots.  At least try.

When I give talks on this topic, palpable excitement fills the room and ideas abound. Folks throw them at me faster than I can respond. With relish, and in support of the overall approach to a different future, not the SOS we have been practicing for fifty years, I encourage non-linear responses:

“Add art, for an appreciation of math and creative thinking, the processes involved and the different kinds of folks who populate our industry.”

“Learn better communications skills so geeks can talk to non-geeks in terms their audience will ‘grok’. We are talking past each other these days.”

And finally, seriously finally, education should be a life-long endeavor. Things change at warp speed, and if we fall behind, catching up again is more and more difficult. Much of what I am talking about could be part of our industry’s continuing education efforts. It could be part of higher education curricula. It can be an advanced certification program with many branches, allowed to evolve instead of metastasizing into useless antiquity … a mere few years from now. Our notion of required complementary knowledge must evolve as does technology, with a focus on security.

Will these thoughts or some descendant framework become part of an industry effort? Or become part of some credentialization requirements? Or integrated into Ethical Hacking courseware? I hope so, with some industry leadership and vision from the global cybersecurity community. 

We need to begin somewhere. Sometime. 

Who’s willing to stand up to take the lead so we can at least try?
