Nikola Jurkovic: Top Ten Seniors in Innovation
This interview was conducted in December 2024. It has been transcribed and edited for clarity.
HTR: What does being an innovator mean to you, and how does this manifest in your daily life?
Nikola: Yeah, I’d say it means taking ideas seriously, even when they might sound crazy. So a
lot of the reason why I started working on AI safety in the first place was just because I was
thinking about it in 2021 or 2022, and even earlier, and I was like, “Oh my God, if it is really true that we’ll get human-level AI sometime in the next few decades, then this just seems like the most important thing to be working on.”
HTR: And what sparked your interest in AI safety? What were some key experiences, interactions, or student groups along the way?
Nikola: A lot of it came from thinking about human-level AI, or superhuman AI, and what that would do to society. And this
was way back before ChatGPT – this was around 2020, and I was just like, “Oh my God, it
seems like society is not preparing for this correctly.” And then I read some books at Harvard. I
was one of the first members of the AI Safety Student Team.
One important moment was reading The Precipice by Toby Ord around 2021, where he makes the argument that AGI, or human-level AI, is coming this century, and that we have not figured out a way to align such systems to human values. In other words, a prerequisite for humanity’s survival past this century has not been met. So I think that’s around when I decided to work on
AI safety.
Also around that time was the first time I played around with GPT-3 in the OpenAI playground.
So before it was even a chatbot, it was just a model that completed text. Using GPT-3 for the
first time was extremely surprising. I think I didn’t expect that level of AI capability anytime soon.
I thought that maybe I’d live to see this in the late 21st century, so it was a complete surprise to see it happening in 2020 or 2021. A somewhat less surprising, but still important, moment was the release of ChatGPT, which is even more capable.
HTR: Are there any classes at Harvard that have been really fundamental and changed the way
that you pursue your academic interests?
Nikola: I’d say I was pretty set on AI safety even before I came to Harvard. But there have been some courses that were ahead of the curve. One of those is Evolving Morality by Josh Greene, which has existed for at least six years. So I think Josh
Greene was very ahead of the curve in predicting that AI would be a big deal. And another one
was an AI policy class from Jonathan Zittrain.
HTR: And are there people or mentors that you’ve met, perhaps in AI safety student groups, or
people you met before college, when you originally got interested in AI safety, that have
contributed to your success?
Nikola: Not directly from meeting people, but I’d say a lot of people’s writings have influenced
me a bunch. I’d say that the writings of Carl Shulman, Ajeya Cotra, and Daniel Kokotajlo stand
out because one thing they have in common is that they’re willing to make detailed models and
falsifiable predictions about the future of AI and its effects on society.
And I really admire their capability to plan under massive uncertainty: when you’re making
models about AI and its effects on society, there are so many unknowns, and I think it takes
someone who’s a really clear thinker to collapse all those unknowns and make predictions
despite them. And many of those predictions have turned out to be extremely accurate despite
all the uncertainty, especially those from Daniel Kokotajlo.
HTR: How did you go about finding community on campus?
Nikola: So in the pre-ChatGPT days, it was pretty tough. I’d say there were only a handful of
people thinking about AI safety. A lot of us met through various reading groups on books
connected to AI safety. And over time, even though most of society was oblivious to how fast AI was advancing, a small group of people thinking about AI and AI safety formed, and that group eventually became the AI Safety Student Team. One thing I admire
deeply about all those people is that they were willing to think about this stuff before everyone
else was thinking about it. They were willing to take those crazy ideas seriously. And now many
of them are doing really, really important work at various organizations.
HTR: Are there other student groups, apart from the AI safety community, that you’ve been
involved with or found valuable?
Nikola: Initially, I was involved with the Effective Altruism club.
I do consider myself someone who’s trying to improve the world as effectively as possible, so it was a good place to spend my time. Effective Altruism is, at its core, about identifying the most important problems to work on and finding ways to fix those problems.
And it just so happened that a lot of the early AI safety thinkers were involved with Effective
Altruism. Effective Altruism is also associated with taking somewhat crazy ideas seriously, and
being willing to think about solving problems that the rest of society isn’t focusing on. So I was
pretty involved with EA in my first two years at Harvard, but after a while, I was just like, EA is
about finding the most important problem, and we have found it. It’s AI safety. So now I feel
more at home in the AI Safety club, which is just focused on finding solutions to AI safety.
HTR: That makes sense. What advice would you give to your younger self about navigating
Harvard?
Nikola: I found that a lot of the value from Harvard, for me personally, has not been in the
knowledge I gained from classes; it has been mostly in the people, both the students and the
professors. Having good conversations with people has been much more important to me than
listening to lectures or doing problem sets.
HTR: Looking into the future, I’m going to assume that some variant of the answer to this question will include AI safety. What are you excited to work on? Are there subfields within AI safety that you’re particularly interested in?
Nikola: You predicted correctly that my plan after graduation is to work on AI safety, and one of the subfields that interests me most is evaluations. That is, how do we
make benchmarks and tests for AI systems that can detect whether those AI systems are
currently dangerous, or will soon be dangerous?
I would say that a lot of the uncertainty in the field of AI, about how safe AI systems are, is
because we don’t have ways to measure dangerous capabilities. And so I’m pretty excited about
making much better ways to measure those capabilities. Currently, I’m interning at METR, which
is an AI evaluations nonprofit. METR is focused on, among other things, making evaluations for
whether AI can speed up AI research, because once AI starts speeding up AI research, it’s only
natural to conclude that the rate of progress will go way up, and thus the other dangerous
milestones will be hit way quicker than they would have been counterfactually. I’m generally very
excited about AI safety agendas that focus on what to do if AGI is only a few years away; I
personally expect that AGI is around three years away, and that this will cause unprecedented
disruptions to society. And I think that agendas which can help us prepare for that moment and
that transition are the most important.
HTR: Is there any advice that you would give to students coming into Harvard about how to
make the most of the opportunities here?
Nikola: One thing that has helped me a lot was thinking about progress in AI and how that affects my life, because if it is true that AGI is coming this decade, then this has huge implications for one’s life plans, including decisions around college. I would say that many of the jobs people are currently pursuing degrees for will probably not exist by the time they finish those degrees. Many people are unaware of just how big of a deal the
creation of human-level AI will be. And I think there are a lot of opportunities in guiding that
transition in a positive direction, both in technical AI safety and in AI policy. A life plan that
doesn’t take into account AI progress is very likely to be derailed very, very soon. So the main
advice I have for planning one’s life is to think about AI progress and how that affects you.
Because if you don’t, your plans will be derailed.