This interview was conducted in December 2024. It has been transcribed and edited for clarity.
HTR: Let’s just start off with a Harvard intro, and then tell me a little about yourself.
Aneesh: I’m Aneesh, a CS and Neuroscience joint concentrator in the AB/SM program for CS, living in Lowell. I work in three research labs (two at Harvard, one at MIT): the Kempner Institute, where I use deep reinforcement learning algorithms to mimic intuitive psychology (essentially, giving AI models intuition akin to humans); the Harvard Computational Robotics Lab, where we use reinforcement learning with real robots; and the MIT Fiete Lab, where we study how learning can be made more efficient.
HTR: What does being an innovator mean to you? And how does this manifest in your
daily life?
Aneesh: Being an innovator means asking the questions that have been itching at you. Innovation is creativity, and especially within my research I tend to ask “what if,” and how we can draw parallels between AI and the human brain and mind.
HTR: What sparked your interest in CS + Neuro?
Aneesh: Since middle school, I’ve been interested in the Turing question (essentially: can machines think?). If machines are able to mimic human intelligence and we can scale that up, I believe we can solve the world’s biggest problems. Sci-fi media also played a big part, especially in piquing my interest in neurophilosophy and in figuring out what counts as consciousness, intelligence, and learning.
HTR: Has your time at Harvard changed your perspective as to whether machines can
think or learn?
Aneesh: Having been in spaces like the Kempner Institute, where I get to work on and see the most cutting-edge AI, I don’t think we’ll ever get to AI consciousness. But there is something unique about LLMs that goes beyond prediction: their emergent ability to think allows reasoning, but not consciousness. Humans are still unique neurobiologically.
HTR: What’s the distinction between thinking/reasoning and consciousness?
Aneesh: The former is a step-by-step approach to solving a problem (LLMs can do that type of ‘reasoning’), whereas consciousness is tied to ego and identity. It’s not just thinking through a step-by-step plan; it’s asking: who am I? What is my awareness? How does this affect me? What are my motivations? As a whole, that involves a lot more than reasoning does.
HTR: Can you share some of the key stories, challenges, and successes from your time
at Harvard?
Aneesh: I’m most proud of the work I did at the robotics lab. Robots face significant challenges when they need to unlearn a little of what they’ve learned in order to adapt to different environments (humans do this very easily thanks to life-long learning). I developed an algorithm that lets any system, not necessarily a robot, continuously adapt and learn in the real world without starting from scratch, and it is currently being used in labs across the country.
My biggest challenge was that I came into Harvard my freshman year with very little confidence. I didn’t think I could accomplish all of the things I wanted to do, so I limited the classes I took, the opportunities I pursued, and the research I did. Being at a place like Harvard can be very intimidating, so I initially set my goals super low. What changed was that sophomore year I started asking, “What if I did neuro?” or “What if I took a graduate class?” That built a lot of confidence that carried through my junior and senior years. My freshman self would never have thought I would apply to be a Rhodes Scholar, but just because I said ‘what if,’ I applied, and then I got it. All these experiences have taught me that if I want to put my mind to something, I shouldn’t undersell myself; I should just do it.
Don’t undersell yourself, and have faith in yourself. You can really do anything you set your mind to.
HTR: Are there any classes at Harvard that have been fundamental to allow you to
achieve all of these amazing feats?
Aneesh: I loved all of my neuro classes. In particular, Neuro 105: Systems Neuroscience sticks with me; I learned how honeybees transmit knowledge via cultural evolution.
It was also cool having two years of Sanskrit and a Sanskrit philosophy course, as well as Eastern linguistics and philosophy classes, as background for approaching AI and AI ethics. These courses were important in grounding me and in thinking about how to tackle different problems from different perspectives.
HTR: How about people or mentors? Are there key figures in your life that have
contributed to your success?
Aneesh: Harvard Dharma really gave me a family and a lot of upperclassman mentors. A lot of the things I’ve done are because of the upperclassmen (now alumni) who lifted me up when I was a freshman. They always supported me and built my confidence.
Other mentors include the postdocs and faculty at the Computational Robotics Lab, specifically Zhiyu and Professor Hank Yang. It’s a very new lab, and they’re both people whose doors I can enter anytime; they don’t treat me like an undergrad, an intern, an assistant, etc. They make a comfortable space for me to open up, question things they’ve said, and bring up my own ideas, and they push me and make me feel intellectually confident and a much better researcher. The same applies to Sam Gershman at the Kempner Institute (also my faculty advisor): he still makes time for me even when he’s incredibly busy.
HTR: How did you go about finding a community on campus?
Aneesh: I didn’t find out about Harvard Dharma until the spring of my freshman year. Don’t get me wrong, the club fair was cool and I walked around quite a bit, but it was more about becoming friends with really cool people and naturally concluding that the clubs they’re in are probably cool too.
HTR: Looking into the future, what are you excited to work on?
Aneesh: There are two big topics that I’m interested in:
- Building up AI reasoning (puzzles like ARC-AGI)
- Creating AI scientists that can actually do science: think up questions and then run experiments autonomously
And, obviously, robotics.
HTR: Anything else you want to mention?
Aneesh: Having been able to take graduate-level AI classes at MIT, and to work in labs at MIT, Stony Brook, etc., I’ve found that there’s something really special about Harvard’s culture of innovation in AI. It’s super uplifting rather than intensely competitive. Harvard is chasing answers to questions, which makes collaboration so much more natural. I’m so grateful for this research culture.

