
The Robots Are Coming…To Make Your Institution More Secure 

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”1 That’s a quote from Elon Musk, speaking about OpenAI, the secretive company he helped found and later stepped down from. Musk has advocated for regulation and government oversight of emerging technologies that could be dangerous, and he gained some fame for saying that AI could be as dangerous as nuclear weapons.

Even in academia there has been some pushback. Prof. Daniel Barnhizer of the Michigan State University College of Law (in full disclosure, I graduated from MSU Law and took Prof. Barnhizer’s contracts course) recently posted a piece on how artificial intelligence (AI) and machine learning (ML) may change the way the legal profession works.2 Specific professions, such as accounting, could in fact see negative outcomes. Information technology workers will not be spared either.

At the same time there is a great opportunity for AI and ML to make certain aspects of technology, especially in the context of higher education, better. Lukman Ramsey of Google recently gave a talk at Rutgers University in New Brunswick, New Jersey, on what AI and ML could do for higher education campuses, with a particular nod to student retention and teacher shortages.3

There have been many advancements in AI, but the technologies are still in the adoption and learning phase. To that end, Ramsey first noted that, over the next five years, there really won’t be much change.4 Much of this is because the technologies are already far ahead of the real-world applications, at least the applications that can be implemented. “The capabilities will already exceed what the industry is using them for,” he said. “In other words, the technologies are way ahead of the applications, and it’ll take a while for the applications to catch up, because people don’t change overnight.”5

Ramsey also noted that AI and ML have gone through multiple hype curves, from the early 1960s, when the technologies first emerged, into the 1980s, when new algorithms enabled three-layer analysis. The 1990s were actually a bit of a down time for AI and ML, as data wasn’t growing fast enough to justify further development. But data has been growing rapidly in recent years, and models can now go as deep as one hundred layers, meaning the complexity and usefulness of AI and ML are increasing.

In the higher education space, Ramsey described three main areas of focus for AI. The first begins before students are enrolled, or have even applied: using AI to identify students who would be good candidates and likely applicants for the school, and to streamline admissions.

Next, institutions will use AI to create better experiences for students once they are on campus. This could range from chatbots in student services to better guidance on majors, progression through programs, and activities to get involved with. It may also include using data-driven tools to identify at-risk and troubled students, allowing for earlier and more appropriate interventions that may keep a student in school.

The third area of application is the transition out at graduation: using data to make finding career opportunities faster and more efficient, while easing the process of applying for and managing job offers. Employers may also be engaged in new ways that make hiring more tailored to each candidate.

While these are outstanding uses of AI and ML, they are also an example of how deeply technology and networks are intertwined with the operation of higher education campuses and other organizations, such as research institutions and medical centers, that rely on Title IV funding. All of them hold sensitive data that lives on connected networks and needs to be protected.

Attackers today are stealthier, quicker, and more advanced. Malicious actors can lay waste to a network using AI in ways that organizational security teams haven’t even considered as potential weaknesses or attack vectors. That makes the use of AI and ML as an ally in defending against these attacks critical. It is one more tool among the defenses teams already have in place, and its efficacy in cybersecurity is growing.

There are two reasons why higher education is adopting these measures. The first is that campuses are alluring targets. They host a lot of data, which may extend to health records, high-end research data, and even data on military and defense programs that begin on campus or use campus facilities. On top of this, they hold sensitive student and faculty data, including extensive personally identifiable information (PII) collected for various reasons, as well as payment data.

Additionally, campuses are often asked to do more with less. They tend to have small teams, especially at smaller colleges, that are stretched thin and wear many hats. At the same time, they often have tight budgets, making it difficult to purchase the security tools they need, so they have to get the most out of the tools they have or turn to managed services to improve their efficiency. This means automating processes wherever possible, and AI becomes a tool that can give them an advantage, or at least level the playing field against certain malicious adversaries.

There are three specific things AI can do for higher education security teams. The first is using AI-driven security analytics to speed up threat detection and response. This can include cutting through the noise of alarms in existing tools, as well as providing active visibility across the entire network, enabling proactive responses and active threat hunting rather than reacting after threats occur.
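To make the idea of “cutting through the noise of alarms” concrete, here is a minimal sketch of analytics-driven alert triage. Every field name, weight, and device name below is invented for illustration; real products use far richer models, and nothing here reflects any specific vendor’s scoring scheme.

```python
# Minimal sketch: scoring incoming alerts so analysts see the riskiest first.
# Severity weights and the novelty boost are illustrative assumptions.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def score_alert(alert, recent_counts):
    """Combine severity with how unusual the alert's source is.

    A source that rarely triggers alerts gets a novelty boost, so a
    first-time offender outranks a chatty, well-known noisy host.
    """
    base = SEVERITY_WEIGHT.get(alert["severity"], 1)
    seen = recent_counts.get(alert["source"], 0)
    novelty = 5 / (1 + seen)  # decays as a source becomes familiar
    return base + novelty

alerts = [
    {"source": "lab-printer-12", "severity": "high"},
    {"source": "dorm-wifi-ap-3", "severity": "medium"},
    {"source": "dorm-wifi-ap-3", "severity": "high"},
]

counts = {"dorm-wifi-ap-3": 40}  # a known noisy device
triaged = sorted(alerts, key=lambda a: score_alert(a, counts), reverse=True)
print([a["source"] for a in triaged])  # the never-before-seen printer ranks first
```

The point of the design is the trade-off in the score: raw severity alone would drown analysts in repeats from noisy devices, so novelty is folded in to surface unfamiliar sources early.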

Second, AI allows for greater capacity to manage vulnerabilities, equipping analysts with technology to discover vulnerabilities and then fix them, or at least speed up the decision timeline for next steps. Networks at institutions have to grow with the organization and at the rate at which technological advances force compliance or adoption. AI is one tool that will equip these organizations to evolve, thrive, and fulfill their vision in a safer cybersecurity landscape.
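One common way tools “speed up the decision timeline” is risk-based prioritization: ranking vulnerabilities not just by raw severity score but by what the affected asset is exposed to. The sketch below illustrates that idea; the asset names, CVSS values, and multipliers are all made-up assumptions for the example, not any standard’s prescribed formula.

```python
# Minimal sketch of risk-based vulnerability prioritization.
# Multipliers are illustrative; real programs tune these from experience.

def priority(vuln):
    """Rank by CVSS base score, boosted when the asset is internet-facing
    or holds sensitive data (e.g., student PII)."""
    score = vuln["cvss"]
    if vuln["internet_facing"]:
        score *= 1.5
    if vuln["sensitive_data"]:
        score *= 1.3
    return score

vulns = [
    {"asset": "library-kiosk", "cvss": 9.1,
     "internet_facing": False, "sensitive_data": False},
    {"asset": "admissions-portal", "cvss": 7.4,
     "internet_facing": True, "sensitive_data": True},
]

for v in sorted(vulns, key=priority, reverse=True):
    print(v["asset"], round(priority(v), 1))
```

Note how the admissions portal outranks the kiosk despite its lower CVSS score: context about exposure and data sensitivity changes which fix comes first, which is exactly the judgment an overstretched campus team needs help automating.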

Third is the ability to detect threats in real time, exponentially faster than any human. This is extremely important in higher education, where new users come online every semester, and sometimes even more frequently. Quickly assessing how those users access the network, building behavioral profiles, and knowing when something is amiss would be too big a hurdle for a whole team, much less a single individual. Only with the aid of AI can these anomalies be identified and remediated almost instantly.
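The “building behavioral profiles” step can be sketched with a toy baseline check: compare a new event against a user’s own history and flag outliers. This is a deliberately simplified stand-in for real user behavior analytics; the threshold, the login-hour feature, and the minimum-history rule are all assumptions made for the example.

```python
# Minimal user-behavior sketch: flag logins far outside a user's usual hours.
from statistics import mean, stdev

def is_anomalous(login_hour, history, threshold=2.0):
    """Flag a login whose hour is more than `threshold` standard
    deviations from this user's historical mean login hour."""
    if len(history) < 5:  # not enough data to build a baseline yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:  # perfectly regular user: any change is notable
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

history = [9, 10, 9, 11, 10, 9, 10]  # a student who logs in mid-morning
print(is_anomalous(10, history))  # → False, within the normal pattern
print(is_anomalous(3, history))   # → True, a 3 a.m. login is flagged
```

Scaled up to thousands of accounts refreshed every semester, this kind of per-user baselining is precisely the work that is trivial for a machine and impossible to do by hand.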

Because AI is new, and tools like user behavior analytics (UBA) and security orchestration, automation, and response (SOAR) are new to some teams, BitLyft has positioned itself to help organizations adopt and utilize these technologies. We work with higher education clients across the country to support their security teams with tools and talent that help keep their organizations safer.

1 https://www.theverge.com/2020/2/18/21142489/elon-musk-ai-regulation-tweets-open-ai-tesla-spacex-twitter
2 http://www.law.msu.edu/spartan-lawyer/winter-2019/faculty-voices.html
3 https://edtechmagazine.com/higher/article/2020/02/5-year-vision-artificial-intelligence-higher-ed
4 Ibid.
5 Ibid.

About the Author

Thomas Coke

Thomas Coke is the Chief Strategy Officer of BitLyft Cybersecurity. He has a JD from the Michigan State University College of Law and a BA in Economics from Kalamazoo College, and he has years of experience in technology startups, with a few successful exits. He can be reached at tom.coke@bitlyft.com and on LinkedIn at https://www.linkedin.com/in/thomascoke/