Episode 10

Bridging Human and Machine through Data Science with Colonel David Beskow

Published on: 7th November, 2024

Join us in this episode of Inside West Point: Ideas that Impact, where we welcome Colonel David Beskow, Academy Professor in the Department of Systems Engineering. COL Beskow discusses his extensive experience and insights on bridging the gap between humans and machines using data science, particularly in cyber, intelligence, and special operations.  

Learn about his fascinating journey from an infantry leader to the Chief Data Scientist at Army Cyber Command, his current role at West Point, and his cutting-edge research on generative AI, machine learning, and drone warfare. Get an inside look at the importance of becoming a data-centric force and how the U.S. Army leverages these technologies for operational success. Don't miss this insightful conversation on the future of data science in military and defense. 


  

00:00 Introduction to Colonel Dave Beskow 

00:12 Beskow's Background and Expertise 

01:55 The Importance of Data in the Army 

04:55 Challenges in Data Collection and Literacy 

10:16 Drone Warfare and Autonomous Systems 

16:50 Social Media and Bot Detection 

19:48 Generative AI and Future Challenges 

22:53 Understanding Synthetic Data 

23:28 Generative AI and Historical Context 

25:31 AI Hallucinations Explained 

26:26 Generative AI in Military Operations 

27:41 Operations Research Center at West Point 

29:13 Current Projects and Research 

33:22 Enriching Cadet Experience 

36:54 Personal Insights and Background 

41:44 Rapid Fire Questions 

43:46 Conclusion and Farewell 

 


Loved this episode? Remember to rate, review, follow, and share this podcast with others.      

 

 


Credits:     

  1. Guest: COL Dave Beskow: (https://www.westpoint.edu/research/centers-and-institutes/operations-research-center) 
  2. Host: Brigadier General Shane Reeves: (http://linkedin.com/in/shane-reeves-7950a31b3)    
  3. Recording: West Point Public Affairs-Visual Information    
  4. Production: Growth Network Podcasts: (https://growthnetworkpodcasts.com)  
  5. Publisher: West Point Press (https://westpointpress.com)    

  

   

  

This episode does not imply Federal endorsement.  

Transcript
Brigadier General Shane Reeves:

Welcome back today. We have Colonel Dave Beskow, Academy Professor in the Department of Systems Engineering, who's here to discuss how we bridge the human and the machine through data science. Thanks for joining us, Dave.

COL Dave Beskow:

Good to be here.

BG Reeves:

So let me tell you a little bit about Dave. He currently serves as the director for West Point's Operations Research Center in the Department of Systems Engineering. He specializes in applying data science to cyber, intelligence, and special operations. Commissioned in 2001 from West Point, he served as an infantry leader in the 82nd Airborne Division and 4th Infantry Division and was deployed to Afghanistan and Iraq.

He went on to serve as the Chief Data Scientist at Army Cyber Command.

For his PhD at Carnegie Mellon University School of Computer Science, he applied machine learning algorithms for information advantage. A graduate of Ranger School and other military programs, he holds a BS in civil engineering, an MS in operations research, and a PhD in societal computing. All right, Dave, so let me start with the most important question. As an expert in analyzing social media, how much time a day do you doom scroll? And how would you rank my spirit videos?

COL Beskow:

Sir, to be honest, I've studied terabytes of social media data, but I prefer not to be on it too much. Ironically, my department has me as communications lead. I usually delegate that to other folks or to ChatGPT to help out with our communication.

BG Reeves:

So you can't tell me if I have a future as an influencer?

COL Beskow:

We've run a survey. I'm not sure if you want to see the results.

BG Reeves:

No, actually I don't. Because I want to believe what I want to believe. I don't want data to support, like, to undercut what I believe. Alright. So, let me start with something I think is simple, but maybe not. So, what is data?

COL Beskow:

That's a great question. Even though I've worked with data for probably the last couple of decades, it's one that we just wrestled with recently.

My team supports a variety of operational project sponsors, and we recently partnered with the 18th Airborne Corps, specifically General Chris Donahue, who is leading the charge there at Fort Liberty. Several members of my team and I went down there to support their warfighting exercise.

We're on the ground with a very talented staff, 18 hours a day in the trenches with them working on that. His primary focus for his warfighting exercise over a two-week period was what he termed live data. The badges for the entire 18th Airborne Corps staff had the famous emblem of the 18th Airborne Corps, and under it it said live data.

So that was their whole pursuit: how do we use live data to inform operational understanding and decision-making at the corps level? We wrestled through that. We did a bunch of things, a lot of things are still tough at that level, and I came back trying to wrestle with what this live data is that General Donahue is really trying to get at.

And there's tons of data out there, from images to videos and everything else, but when it comes down to the live data I think our senior leaders are looking for, it's very transactional. They're looking for the transactional data that really drives the heartbeat, whether that's location data in operations or maintenance data recording that a given maintenance action happened at a specific date and time.

That event data or transactional data is really this live data that I think they're looking for, and that event data needs to be enriched by what we call knowledge data or data about the entities that are in that transactional data.
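To make the distinction concrete for readers, here is a minimal Python sketch of the two kinds of data COL Beskow describes: a relatively static entity record (knowledge data) used to enrich timestamped transactional events (the live data). The field names and sample records are illustrative assumptions, not an actual Army schema.

    # Illustrative only: "knowledge" (entity) data enriching "live" (transactional) data.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Aircraft:
        """Knowledge data: relatively static attributes about an entity."""
        tail_number: str
        model: str            # e.g. "AH-64E"
        unit: str

    @dataclass
    class MaintenanceEvent:
        """Transactional (live) data: something that happened at a point in time."""
        tail_number: str
        action: str
        timestamp: datetime

    def enrich(events, aircraft):
        """Join each transactional event with knowledge data about its entity."""
        by_tail = {a.tail_number: a for a in aircraft}
        for e in events:
            a = by_tail.get(e.tail_number)
            yield {
                "timestamp": e.timestamp.isoformat(),
                "action": e.action,
                "model": a.model if a else "UNKNOWN",
                "unit": a.unit if a else "UNKNOWN",
            }

    fleet = [Aircraft("A123", "AH-64E", "1-101 AVN")]
    events = [MaintenanceEvent("A123", "engine inspection complete", datetime(2024, 5, 1, 14, 30))]
    for row in enrich(events, fleet):
        print(row)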

BG Reeves:

So why does General Donahue, or the entire Army, care so much about data?

COL Beskow:

This has been a priority for our force for a little while, and it has risen to the very top: the Honorable Christine Wormuth, our Secretary of the Army, now lists becoming a data-centric force as her number two priority.

I think she wants to go there because she sees this driving our decision-making in the future, and really war is about decision-making. We think that with data we can start by describing the situation for our leaders to help them make decisions. We think these algorithms and the data that underlie them will then help us be prescriptive: this is the decision you should make. At some point you may even let it go autonomous, where the machine makes the decision in certain cases on its own.

BG Reeves:

I've been in a lot of these conversations and you hear a lot about the ubiquity of data and that it's everywhere. I do want to drill down into how data helps us execute operations and warfare. As you point out, we aspire to be a data-centric army.

The first question I'd ask you is, do you think that is something we should be aspiring to?

COL Beskow:

Absolutely. I mean, it is. In today's world, we can see it from industry, we can see it across the world, that it is going to drive a lot of things in our army. There's a human aspect that will never go away.

War, conflict, is a very human endeavor, but it's increasingly becoming very, very technical. I've worked with a number of leaders in a couple of different foxholes on how we become more data-centric, and usually we walk through three things to get value out of it.

One, we need to have data, and that seems like it's, you know, trivial in its way, but there's a number of times we've been focused on problems where leaders think there's data and there's really not.

BG Reeves:

So that undercuts the idea that data is ubiquitous and everywhere. You're saying there are situations where there's a data desert.

What would be an example of that?

COL Beskow:

A couple of examples. One is logistics. Logistics is one of the hardest things for us to get good data on. My canteen doesn't have a sensor on it that's beeping on a network that lets you know where it sits.

Neither do the magazines in my infantry, you know, M14. So, the only way you're going to get that data is that platoon sergeant calling back and saying, "My platoon is black," however that platoon sergeant is defining black. So that's not automated, that's not live data for General Donahue.

The other thing is that some data is out there, but just because the world has it doesn't mean the U.S. government has it. There are a number of times where corporate entities have data that leaders think we have. And sometimes we have data that's just not digitized.

One of my funniest stories is working in an intelligence organization for a rather sensitive organization. They came to us with a problem and they said, "Hey, we have this problem and we have this data." And I said, "I have algorithms to solve that problem." And they said, "We have data." And I said, "I have models."

They said, "Data." I said, "Models." So finally, after a few weeks, I said, "Hey, I need to see your data," and I went to the organization. I was sitting there expecting them to walk out with an air-gapped computer with their data on it for this sensitive mission.

They rolled out a filing cabinet, and I looked at them and said, "Well, your definition of data is very different from mine." And I sat there for days trying to digitize their filing cabinet.

BG Reeves:

So it sounds like we're talking about data literacy.

Can you, I mean, first off, what does that mean? What does it mean to be data literate?

COL Beskow:

I think data literacy means understanding the data we need. The Operations Research Center has been partnered with ASA(ALT), the Assistant Secretary of the Army for Acquisition, Logistics, and Technology.

So we're partnered with Major General Barry and his team there. As they look at data inside the entire acquisition community, and as they try to build a more data-centric force, they've asked us to look at how we assess that. How do we go to the acquisition executive and say, "Hey, are we 50 percent data-centric, are we 60 percent?"

How do we do that? I think to be a data-literate Army, we need to figure out what data we need. What is the soldier table that contains each soldier and the attributes about them? What needs to be in that table? What is the transactional data that you should have on those soldiers?

Whether it's an OER, a leave form, or some other transactional action we need on this entity we call the soldier, what are the attributes? We need to define that and figure out how close we are and how relevant it is. And then data literacy is understanding the basic models that are driving all of this.

People need to understand ChatGPT today. They need to understand what supervised machine learning is, and the idea of labeled data. Because if we don't understand those things, then we're not going to see the opportunities, for example, to have Intel analysts as part of their workflow to create the labeled data we need for this.

BG Reeves:

What is labeled data?

COL Beskow:

My PhD was in bot detection. So we developed machine learning algorithms to do bot detection.

Bots mean a variety of things in a bunch of different scenarios, but for social media, it's anytime you have an account that may look like a person, but a computer is creating the actions for it.

So take, for example, most people are familiar with Twitter or what is now known as X. On Twitter or X, you can post a tweet, retweet, like someone. You could quote a tweet. All those actions can be automated, so the computer could do it.

In order to do supervised machine learning for bot detection, I need someone to look at this social media account and say, "This is a bot, this is not; this is a bot, this is a bot, this is not."

And the machine learning algorithm is going to then look at all of the associated features of each of those accounts and then learn how to associate that to the label. But you need the label. So someone has to go down by hand and look at a bunch of things about the account and say, "This is a bot and this is not," and that's a lot of time to do that.

But, for example, in the Intel enterprise, we have 60,000 Intel analysts who are part of their workflows looking at data and trying to associate things we're interested in. If we can just have them apply that in a meaningful way, then we have the labeled data to be able to automate some of the things that are frustrating them.
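For readers who want to see the workflow COL Beskow describes in code, below is a minimal, hypothetical sketch of supervised learning on hand-labeled accounts using scikit-learn. The features and labels are invented for illustration and are not drawn from his actual models or data.

    # Supervised-learning sketch: analyst-labeled accounts train a bot classifier.
    # Feature values and labels below are invented for illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Each row: [account_age_days, posts_per_day, followers_to_following_ratio]
    X = np.array([
        [1200, 2.1, 1.8],     # long-lived, low-volume account
        [15, 310.0, 0.01],    # brand-new account posting hundreds of times a day
        [800, 5.5, 0.9],
        [3, 450.0, 0.02],
        [2500, 1.2, 3.4],
        [30, 280.0, 0.05],
    ])
    # Labels applied by a human analyst: 1 = bot, 0 = not a bot
    y = np.array([0, 1, 0, 1, 0, 1])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)   # learn how the features map to the analyst's labels

    print("held-out accuracy:", clf.score(X_test, y_test))
    print("prediction for a new account:", clf.predict([[10, 500.0, 0.01]]))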

BG Reeves:

That's fascinating. Do you think the Army's on the right path to becoming data-centric?

COL Beskow:

Absolutely. We're moving there. There are still challenges; we pull our hair out every day. These last couple of weeks we were working on logistics data for their entire fleet of aircraft, and we still can't figure out how to spell AH-64, which is one of our attack helicopters.

There are a myriad of spellings out there, and we spent two weeks trying to reconcile all the different ways people spell AH-64.
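As an aside for readers, the cleanup problem he describes is often handled with an alias table plus fuzzy matching. A small sketch using only the standard library follows; the variant spellings are invented examples, not the contents of any real Army data set.

    # Canonicalize the many ways one airframe might appear in maintenance records.
    # The alias list and sample records are illustrative only.
    import difflib
    import re

    CANONICAL = "AH-64"
    KNOWN_ALIASES = {"AH-64", "AH64", "AH 64", "H-64", "H64", "APACHE", "AH-64D", "AH-64E"}

    def normalize_airframe(raw: str) -> str:
        cleaned = re.sub(r"[^A-Z0-9 -]", "", raw.upper()).strip()
        if cleaned in KNOWN_ALIASES:
            return CANONICAL
        # Fall back to fuzzy matching for misspellings not in the alias table.
        close = difflib.get_close_matches(cleaned, KNOWN_ALIASES, n=1, cutoff=0.75)
        return CANONICAL if close else raw   # leave unknowns untouched for human review

    for record in ["ah64", "AH-64E", "H 64", "Apache", "UH-60"]:
        print(f"{record!r:10} -> {normalize_airframe(record)}")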

BG Reeves:

Do you have a thought in mind when you would say, "Yep, we're data-centric," or is this a moving target? Are the goalposts going to continue to move as we become more reliant on data to help inform decisions?

COL Beskow:

The goalposts are always moving. But this project we're working on right now for acquisition leadership on data-centricity is helping us think through how we assess this and how we get there.

And I do think we as analysts need to figure out, in a perfect world, what data we need to answer the questions our senior leaders ask us on a daily basis. If we can define those and build requirements for those data sets, we can work back into the systems we have, from IPSE to our operational systems that help provide that data, and see if we can get to that 80, 90, 100 percent solution.

BG Reeves:

If you were a king for the day, what would you say we need to be doing better right now to become data-centric?

COL Beskow:

Right now, after wrestling with this alongside ASA(ALT) and the acquisition folks, I think it is to define that: define what data we want.

In the soldier table for the Army, what are the attributes? To create the transactional, live data General Donahue is striving for, we need to figure out what those things are, whether it's location data or logistics data, what attributes we need to capture, and how we get sensors so we can keep people out of the loop as much as possible, rather than the platoon sergeant calling in that the platoon is black on water. If we can avoid that, great, but when necessary we bring the human in to create the data.

BG Reeves:

Let me segue into a different topic. Drones are commonplace on the battlefield. In fact, if you look at Ukraine, almost daily you hear about the greater proliferation and usage of drones, how drones have absolutely changed the trajectory of that conflict, and how both belligerents rely on them more and more.

And we know that drones aren't going anywhere. And we know we need to rely on drones and we need to train on them. What are some of the things we need to consider when we talk about drone warfare?

COL Beskow:

Both in the Middle East and in Europe right now, drones are everywhere. Particularly in Ukraine, they're changing every single day, and we're watching this, and it's scary.

I mean, you've seen the videos of the different drones and drone attacks on both sides of the war. You've seen the soldiers hunkering down trying to flee from these drones. There's an argument among some leaders whether this is, you know, a revolution in military affairs.

And I think only history will be able to tell where this goes. We're at a big moment as we watch the Ukraine fight in particular, because they are very close to allowing the initial stages of engagement to become autonomous. A big line in the sand will be crossed when that happens, and other nations will be left to decide whether they follow suit.

What we're seeing right now in Ukraine is that one of the biggest weaknesses of drones is their tether. They're tethered to the operator, and that's very targetable in a variety of ways. Some of the drones also have a tether to GPS. And that's also targetable, particularly with the EW things that are available out there.

We've seen that happen and break the kill chain for the drones. In order to remove that vulnerability, what they're starting to do is allow the terminal stage of the drone's flight to be autonomous. The operator still chooses the target, but the terminal stage, when the drone is being targeted by EW, is autonomous.

They're starting to cross that line now. The question is, will they take it a step further? Will they let the drone choose the target if the EW bubble becomes too large? So we're really just watching and seeing what that does, but it's absolutely going to change.

The other big piece is that in order to let a drone go autonomous and choose a target, you're generally going to want to put it inside a box. You choose a box on the ground where you know the only people inside it are enemy, and you may allow the drone to engage a target in that box.

To do that, you can't be tethered to GPS because the GPS right now is the only way that drone autonomously knows it's inside the box. So one of the big technical challenges is how do you get an autonomous agent to know where it is on the ground without relying on a satellite connection.
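For illustration only: the box he describes reduces, at its simplest, to a containment check on whatever position estimate the platform has, plus a rule for refusing to act when that estimate is degraded. The coordinates and thresholds below are invented, and this is not a description of any real system.

    # Toy geofence check: is an estimated position confidently inside a pre-approved box?
    from dataclasses import dataclass

    @dataclass
    class Box:
        lat_min: float
        lat_max: float
        lon_min: float
        lon_max: float

        def contains(self, lat: float, lon: float) -> bool:
            return self.lat_min <= lat <= self.lat_max and self.lon_min <= lon <= self.lon_max

    def inside_approved_area(box: Box, lat: float, lon: float,
                             position_error_m: float, max_error_m: float = 25.0) -> bool:
        """Only report 'inside' when the navigation fix is good enough to trust."""
        if position_error_m > max_error_m:    # degraded navigation: do not trust the fix
            return False
        return box.contains(lat, lon)

    approved = Box(lat_min=50.40, lat_max=50.42, lon_min=30.50, lon_max=30.53)
    print(inside_approved_area(approved, 50.41, 30.51, position_error_m=8.0))    # True
    print(inside_approved_area(approved, 50.41, 30.51, position_error_m=120.0))  # False: fix too uncertain
    print(inside_approved_area(approved, 50.39, 30.51, position_error_m=8.0))    # False: outside the box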

BG Reeves:

Is the technology moving in such a way that will be possible?

COL Beskow:

I've heard of endeavors, I'm not sure how successful they are for true autonomy on drones. I think there's a whole bunch of policy values-based questions that we as a society need to ask, but technically one of the things they're trying to work through is how do you geolocate on the earth precisely without GPS.

BG Reeves:

Yeah. I mean, there are obviously, like you said, some legal, policy, and ethical questions that come with that. How do you ensure the drone targets only what it was sent to target? How do you ensure the principle of distinction is complied with, the obligation to distinguish between civilians and combatants and to target only combatants and military objectives, never civilians or civilian objects?

But just in a vacuum, put aside the legal, ethical, and policy questions. Do you believe the technology will soon be available to allow a drone to target distinct from a GPS signal inside a box?

COL Beskow:

Well, I think the geolocation piece without GPS is still to be determined. Can we do that? I've heard of some technologies.

I'm not sure. To visually identify a target with different sensors, whether that's infrared or not, it's pretty easy to distinguish certain things. Particularly, if you know a box only has enemy combatants, it's pretty easy to distinguish a person on thermal imagery.

Sure. It's really easy. And the machine vision is there to be able to determine and identify that target. Once you mix non-combatants, it becomes messy. But if you can draw a box that's only enemy combatants, you can get the drone to stay in there.

The technology is there to identify the human signature on a thermal image, day or night.
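As a greatly simplified illustration of that kind of detection, the toy sketch below thresholds a synthetic thermal frame for warm, roughly person-sized regions. Real systems use trained vision models; every number here is arbitrary.

    # Toy "thermal" detection: flag warm blobs of roughly human size in a synthetic frame.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    frame = rng.normal(loc=15.0, scale=1.0, size=(120, 160))   # cool background, in degrees C
    frame[40:58, 70:78] += 20.0                                # one warm, person-sized region

    hot = frame > 28.0                        # crude temperature threshold
    labels, n_blobs = ndimage.label(hot)      # group contiguous hot pixels into blobs
    print(f"{n_blobs} hot region(s) found")

    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        height = sl[0].stop - sl[0].start
        width = sl[1].stop - sl[1].start
        if 8 <= height <= 40 and 3 <= width <= 20:    # keep only roughly person-sized blobs
            print(f"candidate {i}: {height}x{width} pixels starting at row {sl[0].start}")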

BG Reeves:

You can think about worst-case scenarios though where it becomes untethered from that box. And then what happens?

COL Beskow:

Those are the concerns, technology and ethics, that we're wrestling with, that the world is wrestling through, particularly those who are fighting for survival.

BG Reeves:

One of the reasons I asked about the technology is because, as you point out, there are potentially legal, ethical, and policy constraints on how we would use this technology, and our adversaries often don't have those same kinds of constraints. Do you see this in Ukraine, for example, the Russians being willing to do things more aggressively than the Ukrainians?

Are you seeing both parties just accelerating as fast as they can towards this autonomous future?

COL Beskow:

They're both accelerating in different ways. The Russians are crossing lines that our nation absolutely would not. You can see from the war crimes and other things that they're crossing a number of different lines.

I think where we're wrestling right now, and let's go to the other side of drones, is counter-UAS. We're supporting PEO Missiles and Space right now; they have the counter-UAS mission. So we're figuring out how we use AI to counter these systems, and one of the things we're doing is developing AI models to speed up the human reaction.

Our national value is that we're not going to let the weapon system pull the trigger. At least right now, that's our national value. We're using AI to help identify targets, but it's the person who pulls the trigger, particularly in the CENTCOM AOR right now. Because of human processes and other things, we need the human to be able to pull the trigger faster on legitimate threats.

So we're trying to figure out how we keep the person pulling the trigger while building AI that gives that person more confidence that this is indeed a threat.

BG Reeves:

This is interesting because the weakest link in this entire process might eventually become the human.

COL Beskow:

Yes, sir.

BG Reeves:

And if that's the case, then you have to adapt or die. Some of these things may be forced upon us: the battlefield may become so lethal, and war so fast, that you won't have the opportunity to react. That is where so many of these currently theoretical questions come up. And increasingly, as you're saying, these are very realistic questions we have to tackle about where the human resides in this loop, where the ethical and legal boundaries are, and how those are implemented, especially against an adversary who ignores those things.

So, you know, our annual theme this year is "The Human and the Machine: Leadership on the Emerging Battlefield." Let's start with the most common human-machine interface that we can all relate to: phones, or perhaps more specifically, social media. Can you tell us a bit about the research you've been leading?

COL Beskow:

I started my PhD working on analyzing social media, particularly we did bot detection. So bots, you know, some are good. So there are some very beneficial bots out there.

There are bots out there that are tied to earthquake sensors. As soon as there's an earthquake, the bot posts on Twitter that there's an earthquake and warns large numbers of people who may be impacted or who may face a following tsunami. Those are bots designed around a simple rule: if this happens, post this tweet, and it goes out at scale. So those are very beneficial, and we need those.
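The beneficial bots he mentions are essentially a rule wired to a sensor feed. A minimal sketch of that pattern follows; fetch_latest_quake and post_update are hypothetical placeholders, not a real seismic feed or social media API.

    # Rule-based alert bot sketch: sensor event in, formatted post out.
    # fetch_latest_quake() and post_update() are hypothetical stand-ins; a real bot
    # would call an actual seismic data feed and a platform's posting API.

    def fetch_latest_quake():
        """Placeholder: pretend a sensor network just reported one event."""
        return {"magnitude": 6.1, "region": "offshore", "depth_km": 30}

    def post_update(text: str):
        """Placeholder: print instead of calling a real posting API."""
        print("POSTED:", text)

    def run_once(min_magnitude: float = 5.5):
        quake = fetch_latest_quake()
        if quake and quake["magnitude"] >= min_magnitude:
            post_update(
                f"Earthquake alert: M{quake['magnitude']} {quake['region']}, "
                f"depth {quake['depth_km']} km. Follow local guidance."
            )

    run_once()   # a deployed bot would run this on a schedule and de-duplicate events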

Some are used by corporations; most of your news feeds are bots. Rather than having a person do it, they create a bot to push their content out. Others, though, start getting more nefarious, and those are the bots we start hearing about. They start as simple spam bots, and then they can go to propaganda bots. It's pretty easy, with a few lines of code, to create a bot that will fan the flames of propaganda.

And they can get more sophisticated where you have almost conversational bots that are designed to insert into a specific group or a portion of society and begin manipulating their ideas or thoughts or trying to tie them to another group. Those are the more malicious bots that sometimes get focused on.

BG Reeves:

So, would you consider when you have a collection of bots, would that be a bot army?

COL Beskow:

It is. And there are, as documented in published research, a number of nation-states that very clearly employ bot armies. Now, when we talk about national values: from all my work with the IEC and the U.S. government, we do not, as a nation, run bot armies that manipulate other societies, but we've seen other nations do that. I have lots of data, for example, on Iranian attempts to manipulate U.S. opinion. One big data set I have is from their campaigns pushing the hashtags #Texit and #CalExit.

Those are their endeavors to try to push California or Texas out of the union. Those are the two biggest states by GDP, and it would change the balance of power in the world if either one left. So while it's a long-shot Hail Mary, they still fan the flames, putting out what are very clearly Iranian bot-army efforts behind #CalExit and #Texit.

BG Reeves:

How do you see this going forward? How do you see this all evolving?

I mean, we know social media isn't going away. How do we respond to all this?

COL Beskow:

Where it's evolving right now is ChatGPT has just changed it drastically. Prior to ChatGPT and generative AI, the algorithms couldn't create nuanced content. The bot armies could be automated.

They could push the content. But if you want nuanced content that you're going to try to connect with a specific audience, you had to have humans create that. There would be warehouses of folks trying to generate this nuanced content. That was one of the big limitations—trying to get nuanced content that you could then connect to your specific audience.

With generative AI, they can scale it. They don't need warehouses of humans creating content for a bot army to push. Now they can connect something that generates very nuanced content for a specific audience, and they can scale that. ChatGPT has guardrails. It's not designed to do propaganda, but there are some open-source models that don't have those guardrails. Those are what people are turning to to generate and push propaganda. We've done research here to evaluate those models and try to evaluate how well they could produce propaganda. We trained it on overt propaganda from some of our peer competitors and adversaries and then evaluated how well these things could scale that propaganda.

BG Reeves:

So you're saying this problem is going to get worse. The use of bot armies for nefarious reasons and that generative AI is accelerating the capabilities of these bot armies, especially with a nefarious actor. I mean, is that true?

COL Beskow:

Absolutely. Anyone working online now has to evaluate any type of content, whether that's text, video, or imagery, and ask whether it is real or generated. A lot of it can now be scaled, and in our evaluation these models were very effective at scaling our adversaries' propaganda.

BG Reeves:

So you just mentioned generative AI, ChatGPT. Eventually, these systems are going to run out of data, right?

COL Beskow:

That's right. Most of these have essentially been trained initially on the internet. It's a large portion of the internet that they've been trained on.

And once we use it, it's not like there's multiple other internets out there.

BG Reeves:

So you're saying that there is the potential to get to the end of the internet. A lot of people have been trying for a long time.

COL Beskow:

There is.

BG Reeves:

So the generative AI is at the end of the internet. They're at the end of the data that's available.

So what happens then?

COL Beskow:

That's right. What they're trying to do is create more tokens. In the language of ChatGPT, tokens are the words or word pieces that are fed into these models, and they need additional material to train on. What some of the open models are doing is using other generative AI models to generate that material.

They produce content that the new model is then trained on. For example, in the context of evaluating propaganda, the model we were using, designed by another country, didn't have any guardrails when we started. As the roughly twelve-month study progressed, we saw it begin to inherit some guardrails. What we believe was going on, though we haven't been able to prove it, is that they were using ChatGPT to create content they then trained on, and by doing that the model was inheriting some of ChatGPT's guardrails. As we tried to produce propaganda, it began giving us the standard refusal text that usually comes out of ChatGPT, which tells us we think this other nation is training that specific model on the output of ChatGPT.

BG Reeves:

So what type of data would you call that? Synthetic? Is that synthetic data? Is that the same thing?

COL Beskow:

It is. It's generative. Now, the thing with generative AI is it's not producing anything new. These models are not creating new events in history; they're taking everything they've been trained on, and that is what's coming out of them.

BG Reeves:

Are they taking pieces of history and combining them to create something new? Or, I mean, is it an amalgamation of data to come up with something that's unique?

COL Beskow:

It'll try to answer the question as best it can, and it'll make synthetic stories, and at times other things. But we also have to remember that these are all trained at a point in time.

When ChatGPT first came out, for example, people would ask it about recent history. These algorithms, generative AI, are trained at a point in time. They don't know anything after that point in time. There are ways you can try to enrich them, or you can search and add some information for them, but if you use the default model without any search capability to grab new events, they don't know anything new.


If you asked the original model, "Tell me about the conflict in Ukraine," it gave an answer based only on what it knew up to its training cutoff.

And so there's new things happening in society, in the world, and everything else. And there are ways that people try to feed that in through search to try to add those in and bring that context into the response. But we have to understand that these algorithms were trained at a point in time, and their underlying core foundational model doesn't know any content or events that have happened since that time.

BG Reeves:

If the generative AI is trained to a certain point in time, has the technology evolved to a point now where you could ask it a question and it might not have the information available to answer that particular question, but it can create synthetic data to answer the question you're asking?

COL Beskow:

It's not creating synthetic data; it's usually searching some type of database or document store to bring the information in. Most of them have what we call very large context windows, which means you can put a lot of text into your prompt. What they'll do is take that context, put it before your query to give the algorithm the context, the point in the story we're at, and then your prompt comes at the very end of that.

When we were with the 18th Airborne Corps on the exercise, we had generative AI available to us and were testing it out as part of the exercise. When you used their generative AI, all of the warfighting exercise documents were available for it to search.

You would put your prompt in and choose how many documents you wanted it to search. It would take your prompt, search through all the warfighting documents, put what it found at the beginning, and then your question came after all that context from the op orders and the other exercise documents.
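What he is describing is the retrieval-augmented pattern: search a document store, then put the best matches into the context window ahead of the user's question. A bare-bones sketch using simple keyword overlap follows; the documents and the scoring are invented, and this is not the actual tool used on that exercise.

    # Bare-bones retrieval-augmented prompt assembly: score documents by keyword
    # overlap with the query and place the top matches ahead of the question.
    # Real systems typically use vector search; the documents here are invented.

    def overlap(query: str, doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def build_prompt(query: str, documents: dict, k: int = 2) -> str:
        ranked = sorted(documents.items(), key=lambda kv: overlap(query, kv[1]), reverse=True)
        context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked[:k])
        return f"Use the following exercise documents as context:\n\n{context}\n\nQuestion: {query}"

    docs = {
        "OPORD 24-01": "Main effort crosses the river at first light; sustainment follows.",
        "FRAGO 3": "Bridge crossing site shifted two kilometers north due to enemy artillery.",
        "Annex D": "Class III resupply every 48 hours at the logistics release point.",
    }

    print(build_prompt("Where is the bridge crossing site?", docs))
    # The assembled prompt is what would actually be sent to the language model.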

BG Reeves:

That's fascinating. During a previous episode, we had Colonel Chris Mayer talk about the intersection of leadership and artificial intelligence. If you haven't listened to it, I highly recommend it. I'm very biased, but it's a good one. So, in the episode, he talks about AI hallucinations.

So first off, could you explain to those who didn't have a chance to listen to that podcast? What is an AI hallucination? And then can you just speak a bit on the topic?

COL Beskow:

Sure. So at its simplest point, an AI hallucination is when one of these algorithms produces content that is nonsensical or fabricated or just wrong.

And remember, what these algorithms are really doing is they're training on a bunch of different tokens, and they're just predicting the next word that we should be adding to put this body of a response together. They can be very sophisticated in that, but they are just putting these words together, and sometimes they'll fabricate pieces of that. This becomes important for us as we consider generative AI for operational purposes here in the U.S. Army. A summer or so ago, I supported the Army Science Board in evaluating generative AI for operational reasons. We looked at using it for intel. We looked at using it in the command post. And we out-briefed the Honorable Bush, our acquisition executive for the U.S. Army.
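As a toy illustration of the next-word prediction he describes, the tiny bigram model below always produces fluent-looking text whether or not it reflects any real event, a miniature version of why larger models can fabricate content. It is nothing like a production model.

    # Toy bigram "language model": predict the next word from counts of word pairs.
    # With so little data it happily produces fluent but ungrounded, repetitive text,
    # a small-scale picture of how fabrication can creep into generated responses.
    from collections import Counter, defaultdict

    corpus = (
        "the patrol crossed the river at dawn . "
        "the patrol reported contact at the bridge . "
        "the convoy crossed the bridge at night ."
    ).split()

    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def generate(start: str, n_words: int = 8) -> str:
        words = [start]
        for _ in range(n_words):
            followers = bigrams[words[-1]].most_common(1)
            if not followers:
                break
            words.append(followers[0][0])
        return " ".join(words)

    print(generate("the"))   # always outputs something plausible-sounding, with no
                             # check against what actually happened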

As we looked at that, we started playing with where could we use it in the command post? Can we use it on chats? Can we use it in op orders, orders of development, other processes? Currently, with what we evaluated, it's not at the point where it can create a plan for our leaders. Our cadets, we teach them to evaluate the enemy and the terrain and the weather to come up with a plan.

And it doesn't have a very sophisticated way to analyze terrain and weather and come up with a plan, so the plans are usually awful. It has very clearly seen a bunch of op orders, so it will give you a plan that's very similar to ones it has seen, but that may not be the right plan for you today.

What it is doing better than our soldiers, though, is creating the op order boilerplate and what we call the coordinating instructions. It has seen so many orders that it includes a whole bunch of things you might simply forget to put in the different boilerplate sections of an op order, given the nuanced mission you're about to execute.

So we do see it being used increasingly. Whether or not it actually eventually comes up with a plan, we'll wait and see.

BG Reeves:

Let me move on to a separate but related topic. Let's talk specifically about the Operations Research Center here at West Point. How long have you been the director?

COL Beskow:

So I've been the director for just over a year. The Operations Research Center has been here for 35 years. It actually predates the Department of Systems Engineering by about a year, I believe. I had a chance to serve in the Operations Research Center as an analyst on my first tour as a junior faculty here and had a blast doing that and was excited to be the director now, leading a very talented team.

I think we have a very robust number of centers across the academy here working on a variety of different things. It's primarily focused on graduate-level research. It's primarily focused on relevant DoD and Army problems.

And so what that allows us to do is take some of the very talented faculty we have here at West Point, apply them to very tough, messy Army problems, and let them work on those. It also gives us a bit of flexibility. As you know, the drumbeat of the academy is the classroom, and our focus is that, but it's sometimes hard to get away to support things like the 18th Airborne Corps warfighting exercise if you're in the classroom every day or every other day.

This gives this small center a little flexibility to surge and support General Donahue and others who have these operational problems that we are able to support.

BG Reeves:

I've been emphasizing this constantly about how there's so much intellectual capital that resides at the United States Military Academy.

And it's really what makes West Point special is the intersection of academic and military expertise. What you're talking about is having some latitude to work on these long-term projects, which is so critical, as you stated, to the Army.

And really, the Army turning to West Point to solve some of these problems. So let's talk about a few of the specific projects currently being worked on. There might be some out there wondering why this work is being done at West Point. As you stated, it's been 35-plus years. But why?

COL Beskow:

I think one reason is that we have a very, very talented faculty here. Almost everyone in my center is junior faculty, meaning they have operational experience, they're wearing the uniform, they're usually anywhere from a captain to a lieutenant colonel, and they have lots of experience in their branch.

They've just come from a tremendous graduate program at a tier-one university, and they've done some teaching, so they understand how to explain things. Now we can unleash them on these hard problems for the Army. It does two things. One, we as an academy are trying to use that talent to solve these problems; my number one job is to solve hard problems for the DoD and provide value to the broader community. My second job is to raise up the next generation of technical leaders and prepare them to go out and solve Army problems for the rest of their careers. That second graduating class, the faculty we launch back out into the Army, is an important part of the value the Academy brings.

And I can think of a number of individuals that I've worked with out in the Army who are part of that second graduating class. One of these leaders that I've worked with recently is Lieutenant General Paul Stanton. I worked with him when he was the Deputy Commanding General there at Army Cyber Command.

He took over as the director of the Defense Information Systems Agency just last week, and he is a tremendous leader. His time here, I think, was very formative for him and set him up for success in leading a very technical organization in our Army. Colonel Isaac Faber came out of our department, a tremendous leader who just took over three weeks ago as the director of the Army's Artificial Intelligence Integration Center.

He's really leading the charge for AI for our Army, and his experience both through grad school and through his time here, once again, was very, very formative in preparing this second graduating class to be the technical leaders of our future.

BG Reeves:

As the dean, I also think about how this enriches the cadet experience.

Can you talk a bit about how not only does it create this incredible second graduating class that goes into the Army and ends up leading the Army in many of these technical spaces, but also how it enriches the cadets?

Let's dive into a few of the recent projects.

COL Beskow:

We usually bring in fairly large projects.

I try to bring in a broad portfolio of the problems the Army has. We're working on AI to analyze threats and threat maneuver in space, supporting space domain awareness.

So we're building a number of algorithms to support Space Command there. We're also supporting U.S. Army Pacific in an assessment of Operation Pathways, and Army Cyber Command, which is the Army service component command for cyber.

We provide one of their biggest data feeds for information advantage. We're right now analyzing all news for the world in all languages—about 400,000 articles a day. And we have a whole machine learning pipeline that we put it through to enrich that data and offer up to a variety of military commands, primarily Army Cyber Command.

We're working with OSD looking at readiness.

So how do we measure readiness? We're bringing a bunch of data and algorithms for readiness and measuring readiness, not just for the Army, but for ground forces in general. And interestingly, OSD calls ground forces anything that doesn't fly or go into the sea. So that means cyberspace and everything else somehow falls into ground.

So we're trying to develop a readiness model for all of that. It's fairly complicated. And then another command that we just started working with is First Army Command. It's one that a lot of folks may not know about. They're located at Rock Island Arsenal in Illinois. They just asked us to work on reconstitution, which is one of their main problem sets.

So we're developing a discrete event simulation to look at three main efforts: how would we go back and run a draft, how could we recall the Individual Ready Reserve, and how would we bring back retirees?

BG Reeves:

Do the retirees have to take a PT test?

Just out of curiosity.

COL Beskow:

Those are the knobs: how much of basic training do you have to go through if you come back, and, you know, whether that's a roll of the dice for you. That would be...

BG Reeves:

Such a sad day. Can you just imagine when that retiree gets that in the mail and they're like, "What? Yeah. Are you kidding?"

COL Beskow:

This is something we hope never happens, but if our nation requires it, we need to be ready to execute. The big question they have is how long it would take to get to the required numbers. We build the whole process into the simulation and figure out, for example, whether retiree basic training should be six weeks or eight weeks.
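For readers unfamiliar with the technique, a discrete-event simulation advances a clock from event to event (arrivals, graduations) rather than tick by tick. The sketch below is a toy version of the course-length question, with entirely invented arrival rates, seat counts, and targets.

    # Toy discrete-event simulation: returnees arrive weekly, wait for a training seat,
    # and graduate after a fixed course length. All rates and capacities are invented.
    import heapq

    def weeks_to_target(course_weeks, seats=200, arrivals_per_week=120, target_graduates=5000):
        events = []                                   # min-heap of (week, kind)
        week = 0
        while week * arrivals_per_week < target_graduates * 2:   # schedule plenty of arrivals
            heapq.heappush(events, (week, "arrive"))
            week += 1

        waiting = in_training = graduates = 0
        now = 0.0
        while events and graduates < target_graduates:
            now, kind = heapq.heappop(events)
            if kind == "arrive":
                waiting += arrivals_per_week
            else:                                     # "graduate": one trainee finishes
                in_training -= 1
                graduates += 1
            while waiting > 0 and in_training < seats:    # fill any open seats
                waiting -= 1
                in_training += 1
                heapq.heappush(events, (now + course_weeks, "graduate"))
        return now

    for weeks in (6, 8):
        print(f"{weeks}-week course: about {weeks_to_target(weeks):.0f} weeks to 5,000 graduates")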

BG Reeves:

How does all of this enrich the cadet experience? As you noted, our drumbeats are in the classroom, and our primary mission is to educate, train, and inspire the core of cadets. So how does this all translate and then facilitate that?

COL Beskow:

Right. Really, I think the Operations Research Center just kind of overflows the classroom in a variety of ways. One is we oftentimes, as we work with our project sponsors, we can roll these into what in our department we call a capstone experience, which is a very formative experience for our cadets.

PEO Missiles and Space, Army Cyber Command, and Space Command have all said that in addition to faculty research, they want a line of effort that is just cadet research. By building these partnerships, we can bring this into the classroom with a two-semester capstone focused on very relevant problems.

Right now I'm working with Army Cyber Command with a group of three very, very talented seniors or firsties. They're working with data that they've never seen before. They're working with a very messy problem. They're trying to figure out who the stakeholder even is and what that stakeholder's main question is, to dive in and get after this problem.

So that very much enriches them. And then, oftentimes, our analysts in the Operations Research Center go back and teach in the classroom. Instead of talking about a boring academic scenario or some mundane task, they can bring in very relevant work they've been doing for Space Command, PEO Missiles and Space, or other sponsors. It just enriches our classroom experience.

BG Reeves:

How do you see these cadets develop when you give them a complicated problem without a solution in mind?

COL Beskow:

One, they just have to wrestle with it. How do I start tackling it?

And how do I, I mean, I have this elephant, what's the first bite you have to take? You have to just start somewhere and start wrestling through it. And we're standing on the backs of giants.

So how do we look at what others have done, bring it into this problem set, and use it? How do we iterate with a stakeholder to get after the real question, the real problem, and how we can address it? Right now, for example, we're developing dashboards for the Army Cyber Command information advantage platform.

Who is actually the user? We always want to shake the hand of the true operator that's going to use this—what are their problems and how do we try to get value for the true operator, the user on this platform.

BG Reeves:

That's fantastic because what you really just described is we're teaching them how to think through a problem, asking them to be creative and innovative and to find solutions,

all of which translates very well into success on the battlefield, especially in the complicated world we live in.

COL Beskow:

One of the biggest things I think we also try to do is, you know, I talked about three big things in the beginning. We need data. We need data teams. We need data environments to be able to put that data on.

But one quote I always come back to is from Grace Hopper, one of the leading computer pioneers of the early computing era.

She came up with the term "bug" to describe a problem with your code. But she said a computer will never come up with a new question. And so really what we're trying to also help cadets do is how to ask that great question. Because I think that's where true innovation is—it's the person who can ask the great question. We need technologists to build the data, to build the systems and environment to put that data in, to be able to train the team.

But the innovator is the person who can ask the great question with an understanding of what the technology is, and that's where true innovation happens. And we're trying to help our cadets be that innovator.

BG Reeves:

Exactly. That's exactly right. So let's shift gears a bit and talk about you. So I find that our listeners are oftentimes fascinated to learn more about our faculty and what inspires them to do the work that they do.

So why did you decide to teach at West Point?

COL Beskow:

I think when it comes down to it, it's two things: the mission and the people. Two things with the mission. One, being able to inspire, lead, and mold the next generation of technical leaders for our Army and our nation is absolutely inspiring; to come here and do that is phenomenal. Additionally, we just talked about the breadth of Army problems we get to work on, and there are few places I could go where I'd be able to touch everything from space to counter-UAS to reconstitution.

The ability to touch that breadth of problems and use technical skills that the Army's helped me to achieve and leverage them against those problem sets is just a lot of fun. And then it comes down to people. Both the cadets in the classroom and out of the classroom are a ton of fun.

I look at my left and right and I'm just humbled and inspired by the fellow faculty we have here.

BG Reeves:

The follow-on to this is you're an academy professor, which means you are now what's called a functional area 47, which means permanent military faculty. You're a very talented guy and talent always has options.

What made you decide to invest your professional expertise and stay in uniform and be a permanent military faculty at West Point?

COL Beskow:

I think it's a lot of the same mission and people—to be able to do that. There's no other place I could go to do both of those things with such a great team, with that breadth of mission.

And then, the next generation—it's hard to understand the impact of that until you look left and right at the classmates that I had and what they're doing now, whether they're leading in the 75th Ranger Regiment or elsewhere, and to think that these cadets are going to be the next people that do that and that really lead our nation both in and out of uniform.

BG Reeves:

Tell me a little bit about your background, and how does that inform your approach to teaching and research?

COL Beskow:

I spent ten years in the infantry, and every time I step into the classroom is informed by that. I had a chance to lead from platoon through company level in combat.

I also had a chance to be a platoon trainer at IBOLC, leading in TRADOC at the infantry school; I took five platoons through IBOLC's 17-week program of instruction. So every time I step into the classroom, it's informed by that experience. The Army is, even among the other services, a very human endeavor. This is about boots on the ground, and it's very human. So my views of leadership and of soldiering are informed by that experience, and I think it's great to bring that into every class.

And as you interact with cadets in a variety of arenas and places, you can bring those thoughts into it. Beyond that, my experience as an ORSA has allowed me to touch a bunch of different problems in a bunch of different foxholes and to bring those kinds of experiences into the classroom. We're not talking about some mundane academic exercise pulled out of a textbook; instead, let me talk about when we had the SolarWinds compromise at Army Cyber Command.

The Russians were inside the wire, we knew they were there. It just brings a lot to the classroom when you can bring those stories of real-world problems with data and with the techniques we teach.

BG Reeves:

How does that background inform your research?

COL Beskow:

We have a variety of folks who can do research here: civilians and military. My background on the military side is essential for the research portfolio here, and so are the civilians. They bring a depth and a discipline that are absolutely essential for what we offer, both in the classroom and in research.

I think what the military has is that operational experience, working through the problems, the platforms, the questions, trying to understand what these questions these leaders have. As General Donahue is struggling with live data, what is he really getting at as a warfighter, given the scenarios that he's trying to fight.

I think bringing both the technical abilities together and that operational experience is absolutely essential in answering those questions.

BG Reeves:

Alright, so I usually give guests about 30 seconds to give a pitch for their academic discipline. So, I'm going to give you 30 seconds to explain why a cadet should follow a passion in systems engineering.

Ready? Go!

COL Beskow:

Alright. The Department of Systems Engineering offers two programs, systems engineering and engineering management, and we partner with the Department of Math on operations research; we support them in operations research and data science.

All four of those things are absolutely essential in the world today, given what we just talked about. If data is the number two priority for our department, and that is echoed in a number of different enterprises, both commercial and in other parts of government, then we absolutely need folks that are systems engineers.

We need people who are disciplinary engineers, but we also look beyond that, bringing all the engineering disciplines and other multidisciplinary perspectives together to solve these big systems-type problems. Our challenges in recruiting, in acquisition, and in the operational Army are all major systems-level problems that we need leaders to understand.

BG Reeves:

That's a pretty good pitch. All right, I'm going to ask you a few rapid-fire questions. You can't think about them very long, just answer. Most inspiring leader?

COL Beskow:

Dick Winters from Band of Brothers.

BG Reeves:

What's your favorite coding language?

COL Beskow:

I think Python. I started in R, but I very rapidly noticed most of the world is moving toward Python, and particularly all AI now generally comes out in Python first and then is ported to other languages.

BG Reeves:

If you could have dinner with any historical person, who is it?

COL Beskow:

Jesus Christ.

BG Reeves:

Okay. Most important human skill besides data analysis. You can only pick one.

COL Beskow:

The most important human skill? Getting up and making your bed.

BG Reeves:

It's McRaven, the old McRaven University of Texas speech, isn't it? What's the hardest class you've ever taken?

COL Beskow:

It was a machine learning class at Carnegie Mellon University. There were 35 students in the class. Only two of us were U.S. citizens. I spent 40 hours a week trying to keep my head above water in that. The grades were determined on a leaderboard. So, in other words, you had AI tasks, and you had to post the results and move up the leaderboard.

If you weren't in the top 20 percent of your class, then you wouldn't be getting the grades. It started out with, "Rework Amazon's top algorithm. Choose any coding language of your choice, go." And we just went from there. Very challenging class.

BG Reeves:

What's at the top of your reading list?

COL Beskow:

Engineers of Victory. It looks at the engineered victories of World War Two, both on the technical side and the people-leadership side, and how those things came together to create victory across various problems of the war.

BG Reeves:

Do you have any parting advice for our audience?

COL Beskow:

One of the things I've always pushed, something I heard as an infantry officer and have hung on to as far as leading in the Army, is to build a team and be its compass. That's been very helpful in shaping when you have to build the team and when you, at times, have to be its compass.

BG Reeves:

It's a great answer. I thought you were going to say "All systems go," which I thought you were going with.

All right. Well, thanks, Dave. Appreciate it, man. Thanks, sir.

Please be sure to tune in to the Inside West Point Ideas That Impact podcast next month. Remember, you can find this podcast as well as the other podcasts, journals, and books hosted or published by the West Point Press at westpointpress.com. Until next time.


About the Podcast

Inside West Point: Ideas That Impact
Join Brigadier General Shane Reeves, Dean of the United States Military Academy at West Point, as he takes you behind the scenes to explore the applied research and cross-disciplinary work being done by the Academy's scholars.

From high-energy lasers and artificial intelligence to civil-military relations and ethics, this podcast goes beyond the textbook to give you a deeper understanding of the complex issues shaping the modern battlefield. Hear directly from the experts as they make even the most complex topics accessible to a broad audience. Get inside access to West Point's work and see how it's being applied today.

Tune in for upcoming episodes by following us on your favorite podcast platform and follow us on Instagram (@dean.usma), Facebook (Dean of the Academic Board-West Point), and Twitter (@DeanUSMA) for updates.

Learn more about West Point’s academic program at https://www.westpoint.edu/academics

Disclaimer: This podcast does not imply Federal endorsement.