Leading Through Disruption
A Culture Fostering Innovation With An Ethical Mindset | Greg Morrisett, Dean & Vice Provost at Cornell Tech
Greg Morrisett, Dean and Vice Provost at Cornell Tech, shares his key leadership lessons on AI regulation in academia and higher education innovation in this Leading Through Disruption interview with Anastassia Lauterbach, Managing Director of EMEA at The ExCo Group.
Lauterbach: Could you please talk about the path that led you to today?
Morrisett: I’ve been in academia my entire life and was happy toiling away at research and teaching. Initially, I had no dreams of moving into a leadership position beyond my lab. I was at Harvard when my Associate Dean was promoted to Dean of the faculty. I succeeded him in the Associate Dean role and discovered that I enjoyed it. Then Cornell had an opening for a Dean position at the College of Computing. Cornell was my first academic love, as I’d started my career there. I spent the next four years in Ithaca running the Computing and Information Science unit and was then approached to run the new Cornell Tech campus in Manhattan.
Lauterbach: How has the leadership of academic institutions changed since you started?
Morrisett: Fortunately, the cohort of leaders has diversified a lot. When I started, there were no women presidents in the Ivy League; then Drew Gilpin Faust became the first at Harvard. Since then, all of the Ivies have cycled through leadership diversification. Changes like this are critical, as academics are asked to address many things.
Universities are expected to have a stance on all political issues while balancing freedom of speech and academic freedom. We aim to produce an environment that’s safe and conducive to learning and research.
Lauterbach: Do you observe a shift in how international students might perceive US Universities?
Morrisett: After the Cold War, the US and the UK were considered premier places for higher education worldwide. In the past decade, Singapore, Korea, and China have all invested heavily in educational systems, producing world-leading institutions. In the UK, we’ve seen challenges around funding. In the US, public institutions are no longer supported by the state on a meaningful scale. Berkeley, for example, gets around 9% of its budget from the state of California. Is it still a public school? And yet, Berkeley is one of the strongest institutions in the world. Seeing strong universities emerge around the globe is a natural and good evolution, in the sense of broader access and availability.
The US still has some of the best universities in the world. At Cornell Tech, over half of the graduates come from around the world.
Lauterbach: How do you define leadership?
Morrisett: Leadership is listening. While managing a university, you’re not driving the train. The faculty are the real owners of the direction, speed, and excellence. Your job is to remove as many obstacles as possible to let educators shine. Leadership is about cajoling and finding ways to work together as a team, serving the interests of all parties.
Lauterbach: How do you define innovation in the context of academia?
Morrisett: Academic institutions are quite slow, and sometimes that’s good. Passing fads come through, and academics can let them wash over everybody and wave them goodbye. But that slowness can be a struggle, especially when silos or fences prevent collaboration and interaction where you need them the most.
AI is an excellent example. It draws on an ever more comprehensive array of fields, such as neuroscience, cognitive science, law, and engineering. Yet we’re organized into departments by field rather than around an emerging opportunity like AI. How do we reimagine that?
One fascinating thing at Cornell Tech is the luxury of building a new campus without those existing silos. There is a real opportunity to build an interdisciplinary faculty, from computer scientists to lawyers and business scholars. When we hire, we ask whether a new colleague is willing to work with people outside their field.
Things aren’t without challenges. Different mindsets exist in an engineering school versus a business school, and medicine isn’t comparable to law. You need a handful of specialists who can speak the language of several sides to facilitate collaboration.
Going back to the ideas of a liberal education: it has become increasingly important. Making students broadly informed, providing a platform for deeper study, and specializing afterward is easier said than done. Similarly, if we flip it around, it becomes essential for humanists to understand technology. Some of them still believe AI is going to destroy our humanity rather than augment and support it.
Today, AI tools are just an accelerator. 20 to 50% of productivity gains might come from AI copilots. But how can we get people excited about learning again, rather than cheating, while using AI? This is a daily challenge to work on.
Lauterbach: Belgium, Germany, Austria, and the Netherlands are moving into mass retirement this decade. These countries failed to establish sufficiently robust migration policies and develop automation strategies to balance the inevitable decline of GDP. Besides, the proliferation of digital assets in the economy makes everyone vulnerable to cyberattacks. How can we address these societal, economic, and technological challenges when we talk to today’s politicians and policymakers?
Morrisett: No country in Europe or North America can succeed in the future without really robust immigration planning. If you are in STEM, an OPT visa is helpful. It lets you work for three years and establish traction, for example. We ought to look at the fastest-growing parts of the world, like Africa, and consider how to nurture immigration in a controlled way. A balanced immigration plan and policy would lift everybody, but it’s politically challenging in an age of more strident nationalism and anti-immigration rhetoric. Thoughtful regulation is required to address technologies.
The EU AI Act’s approach is a bit heavy-handed. Conversely, in the US, AI regulation is a bit too loose. To be effective, regulation must focus on outcomes rather than technologies while ensuring best practices. Besides, we must understand that safety protocols for healthcare, finance, and creative arts differ.
Starting with fixing the basics might bring a lot of positive change. There is still very little attention or understanding of software and its role, e.g., in hospitals and medical institutions. Lax standards provide an environment for phishing attacks. For example, many medical institutions aren’t patching their systems, which allows criminals to target them.
Cyber insurance will shape a lot of behavior going forward because it’s becoming increasingly difficult to underwrite the risks. We’re seeing good things coming out of NIST and similar frameworks around processes and practices for software developers. Still, accountability for failing to follow best practices is paramount. Simple things like memory-safe languages or software bills of materials could be helpful, even if they aren’t a silver bullet that fixes everything.
With the rise of AI, it will get worse before it gets better. Misinformation and its enablement by AI tools will make it tough to catch up in a technical sense about how to prevent it over the next few years.
Lauterbach: Cornell Tech created an Urban Tech Hub. Could you please talk about this initiative?
Morrisett: Cornell Tech was established because New York City wanted to develop its tech ecosystem. The city invested $100 million into the establishment of this campus. This evolution gives us a sense of purpose and a mission. Our Urban Tech Hub is a perfect example of working with city agencies to help them with various challenges.
There are simple ones, like the New York Parks Department, which has a database that tracks every tree in New York City. As climate change accelerates, tree coverage and canopies provide a really important cooling function. There are more complicated things like electrification. How will we charge all the cars and power all the buildings? New laws in New York, for example, require us to significantly reduce greenhouse gases by the 2030s.
Finally, major challenges like public health call for sophisticated models for tracking the spread of infectious diseases. Technology can play a tremendous role in solving a city’s problems, and a university can be vital in stepping in, as a sufficient budget is always a challenge.
Lauterbach: Taking the level of challenges, what are the most critical success factors for your students to become great leaders?
Morrisett: One of the things that is critical is disciplinary respect. We have a program called ‘Studio,’ where we intentionally build teams of students from different backgrounds. You might bring together a business student, a couple of engineers, a lawyer, and a designer. We bring companies to pose real-world challenges to these teams and mentor students while they engage in interdisciplinary problem-solving. And they hate it at the beginning! Engineers want to be with engineers, and business people want to be with business people. They don’t speak the same language. Any challenging opportunity, however, calls for teamwork. We’re intentionally creating an environment for students to understand the value of collaborating and diversity.
We’re using these tools to think about what business students should understand about AI and AI-enabled transformation. Conversely, engineers should understand the business side of the AI equation and its ethical and moral aspects. For example, we have a competition for startups created here on campus, and every one of them must undergo an ethical review of its proposition before receiving funding.
Lauterbach: What would you advise a CHRO of a large traditional business to prepare his or her company for embracing technology literacy and AI?
Morrisett: Everybody should have a strategy. I have just met with many financial institutions wondering how much ROI they would see from their investment in AI. Most of them have yet to see a return. AI, though, isn’t about a killer application. It is about organizing AI pilots and efforts around a robust strategy and having a sense of direction and purpose about what the technology should solve for.
Lauterbach: How do you define your own success in your current role?
Morrisett: Cornell Tech began as a startup campus and grows about 10% yearly. We were created around an interdisciplinary character. I worry about how we will keep this mentality as we develop, and that silos might reappear as we scale. We must ensure a culture that fosters innovation with an ethical mindset. This stands in contrast with Silicon Valley, where everything is driven towards building fast, breaking things, and making a pile of money.
New York will not stomach that. We must embrace a broader range of goals and be innovative while looking at this broader scope. In AI and technology, financial KPIs aren’t enough.
Lauterbach: Who are your role models who influenced you on your path?
Morrisett: There are many academic leaders who came before me whom I idolize and look up to. My current provost, Michael I. Kotlikoff, is a good example. When I was doing my PhD, my advisors, Jeannette Wing and Robert Harper, instilled in me a sense of deep integrity.
The role of academia is more than just writing papers and proving theorems. It’s really about having an impact on individuals. Nothing is more fulfilling than seeing that light bulb go off over students’ heads when they get something. It is also about engaging with and embracing the social and economic ecosystems around them. Universities have a responsibility to serve their local communities.
This interview on AI regulation in academia with Greg Morrisett, Dean and Vice Provost at Cornell Tech, is part of Leading Through Disruption, our series featuring powerful conversations with leaders navigating this era of relentless change.