About Me

I am a PhD student in Linguistics at UC Berkeley, working at the intersection of computational linguistics, deep learning, and cognitive science. My work examines how humans process language and how these insights can inform the design and evaluation of computational language models. I am particularly interested in using deep learning to study the structure of linguistic representations, both in machines and in the human brain, and in exploring whether modern language models can serve to test hypotheses about human language and cognition.

Latest Publications

Neural Correlates of Language Models Are Specific to Human Language
Parra, I. (2024)
NeurIPS 2025 / PMLR
TL;DR: This study investigates correspondences between language model representations and human brain activity, demonstrating that the representations underlying modern language models align specifically with human language processing rather than with general sequence processing. I also show that LM and brain representations share geometric properties.
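One standard way to quantify this kind of model-brain alignment is representational similarity analysis (RSA). The sketch below is a minimal, self-contained illustration of that general idea on toy data, not the paper's actual analysis pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy data: activations for the same 20 stimuli from two systems
# (e.g., a language model layer and a set of brain recordings).
rng = np.random.default_rng(0)
lm_acts = rng.normal(size=(20, 256))    # 20 stimuli x 256 model units
brain_acts = rng.normal(size=(20, 64))  # 20 stimuli x 64 voxels/sensors

# Representational dissimilarity: pairwise distances between stimuli.
lm_rdm = pdist(lm_acts, metric="correlation")
brain_rdm = pdist(brain_acts, metric="correlation")

# Alignment score: rank correlation between the two RDMs.
rho, _ = spearmanr(lm_rdm, brain_rdm)
print(f"RSA alignment (Spearman rho): {rho:.3f}")
```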
Interpretable Sparse Features for Probing Self-Supervised Speech Models
Parra, I. (2025)
AACL 2025
TL;DR: I proposed a novel application of sparse autoencoders (SAEs) to understanding self-supervised speech models through interpretable sparse feature extraction. The method revealed linguistically meaningful patterns in the SAE features, offering new insights into what these models learn about phonetic and phonological structure.
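For readers unfamiliar with the technique, here is a minimal sketch of what a sparse autoencoder trained on frozen model activations looks like in general; the paper's actual architecture, data, and training setup differ.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: overcomplete dictionary with an L1 sparsity penalty."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))  # sparse feature activations
        recon = self.decoder(features)
        return recon, features

# Train on frozen speech-model activations (here: random stand-ins).
sae = SparseAutoencoder(d_model=768, d_hidden=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(32, 768)  # a batch of hidden states from the speech model

recon, feats = sae(acts)
loss = nn.functional.mse_loss(recon, acts) + 1e-3 * feats.abs().mean()
loss.backward()
opt.step()
```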
Adaptive Compute Efficient Learning via Conceptual-Criticality
Parra, I.; Bharathi, M.; Kumar, M.; Kumar, P.; Mishra, P. (2025)
AAAI 2025
TL;DR: We introduced a conceptual-criticality framework for adaptive compute allocation in neural networks, enabling models to dynamically adjust computational resources based on input complexity. This approach achieves significant efficiency gains while maintaining or improving model performance.
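The conceptual-criticality framework itself is the paper's contribution and is not reproduced here; as a generic stand-in for the broader adaptive-compute idea, the sketch below shows early exiting, where confident (easy) inputs leave the network at shallow layers.

```python
import torch
import torch.nn as nn

class EarlyExitMLP(nn.Module):
    """Generic early-exit network: each block has its own classifier head,
    and confident predictions skip the remaining (more expensive) blocks."""
    def __init__(self, d=128, n_classes=10, n_blocks=4, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(n_blocks)
        )
        self.heads = nn.ModuleList(nn.Linear(d, n_classes) for _ in range(n_blocks))
        self.threshold = threshold

    def forward(self, x):
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            probs = head(x).softmax(dim=-1)
            # Exit early once a head is confident enough in its prediction.
            if probs.max() >= self.threshold:
                return probs
        return probs  # fall through to the deepest head

model = EarlyExitMLP()
print(model(torch.randn(1, 128)).shape)  # torch.Size([1, 10])
```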
Morphological Typology in BPE Subword Productivity and Language Modeling
Parra, I. (2024)
NeurIPS 2024
TL;DR: This project explores how morphological typology affects BPE tokenization efficiency and language model performance across diverse languages. I demonstrated that morphological typology has a substantial effect on tokenization efficiency, and that this effect carries over to language model training.
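One can get a feel for subword productivity by counting how many BPE pieces a pretrained tokenizer needs per word form across languages. The sketch below uses GPT-2's English-centric tokenizer via Hugging Face purely for illustration; it is not the paper's experimental setup.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # English-centric BPE vocabulary

# Morphologically rich word forms tend to be split into more subwords
# than analytic ones, inflating sequence lengths during training.
words = {
    "English (analytic)": "houses",
    "Finnish (agglutinative)": "taloissammekin",  # 'also in our houses'
    "Turkish (agglutinative)": "evlerimizde",     # 'in our houses'
}
for label, word in words.items():
    pieces = tok.tokenize(word)
    print(f"{label:28s} {word!r} -> {len(pieces)} subwords: {pieces}")
```

Longer subword sequences for morphologically rich forms translate directly into longer training sequences for the same content.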
Noise Be Gone: Does Speech Enhancement Distort Linguistic Nuances?
Parra, I. (2024)
ACL 2024
TL;DR: I investigated whether speech enhancement techniques preserve or distort fine-grained linguistic features. My findings revealed that noise reduction improves overall intelligibility and that enhancement methods do not significantly alter the phonetic distinctions critical for downstream linguistic analysis.
UnMASKed: Quantifying Gender Biases in Language Models through Linguistically Informed Job Market Queries
Parra, I. (2023)
EACL 2024
TL;DR: Using linguistically informed job market queries, I developed a methodology for quantifying gender biases in masked language models. The analysis revealed systematic biases in occupational associations, offering insight into how language models perpetuate societal stereotypes.
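A toy version of such a probe can be built with a fill-mask pipeline, comparing the probabilities a masked LM assigns to gendered pronouns in occupational contexts. This is illustrative only; the paper's query construction and scoring are more elaborate.

```python
from transformers import pipeline

# Illustrative probe only: compare pronoun fill-ins for job contexts.
fill = pipeline("fill-mask", model="bert-base-uncased")

for job in ["nurse", "engineer"]:
    query = f"[MASK] works as a {job}."
    scores = {r["token_str"]: r["score"]
              for r in fill(query, targets=["he", "she"])}
    print(job, scores)
```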

Visuals

Sinusoidal Position Encodings

Demonstrating how sinusoidal positional encodings work in transformer models
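A minimal implementation of the encoding itself, following the formulation in "Attention Is All You Need":

```python
import numpy as np

def sinusoidal_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same)."""
    pos = np.arange(seq_len)[:, None]      # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]  # (1, d_model/2), holds 2i values
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_encoding(seq_len=50, d_model=64)
print(pe.shape)  # (50, 64)
```

Because the wavelengths form a geometric progression, the encoding at position pos + k is a linear function of the encoding at pos, which is what makes relative offsets easy for attention to pick up.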

Query, Key, and Value

Visualizing the Query, Key, and Value matrices in attention mechanisms
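In code, the three matrices are simply learned linear projections of the same token representations; a minimal single-head version of scaled dot-product attention:

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head attention: softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq, seq) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output row: weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))  # 5 tokens, model dim 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```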

The GAN Objective

Breaking down the mathematical formulation of Generative Adversarial Networks
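For reference, the objective being visualized is the two-player minimax game from Goodfellow et al. (2014):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator D is trained to distinguish real samples from generated ones, while the generator G is trained to fool it; at the optimum, the generator's distribution matches p_data.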

Some of My Favourite Quotes

  • "God could have caused birds to fly with their bones made of solid gold, with their veins full of quicksilver, with their flesh heavier than lead, and with their wings exceedingly small. He did not, and that ought to show something. It is only in order to shield your ignorance that you put the Lord at every turn to the refuge of a miracle."
    Galileo Galilei (1564-1642)
  • "All models are wrong, but some are useful."
    George E. P. Box (1919-2013)
  • "Intuitions have been tacitly granted a privileged position in generative grammar. The result has been the construction of elaborate theoretical edifices supported by disturbingly shaky empirical evidence."
    Wasow & Arnold (2004, p. 1482)
  • "Being a native speaker doesn’t confer papal infallibility on one’s intuitive judgments."
    Raven McDavid (1985)