I am a Specially Appointed Assistant Professor in the Okazaki Laboratory at the Institute of Science Tokyo, working with Prof. Naoaki Okazaki. Before June 2025, I led applied LLM projects as Lead AI Engineer at ELYZA, Inc., delivering language technologies for healthcare and industrial partners. My broader goal is to build language technologies that enable reliable human–machine collaboration.
Right now I’m especially focused on making large language models safer, stronger reasoners, and more controllable generators. Current directions include:
- LLM safety and fairness — Our Findings of EACL 2024 paper on in-context gender bias suppression shows how prompt-space interventions can mitigate harms without retraining; the first sketch after this list illustrates the general idea.
- Test-time compute — Our preprint introduces Best-of-∞, which approximates an LLM's performance under an infinite computational budget using only a finite one (preprint); the second sketch below illustrates the intuition.
- Discrete diffusion language models — I’m developing discrete diffusion approaches for grounded, controllable text generation (manuscript in preparation).
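The prompt-space idea behind the bias-suppression work can be pictured with a minimal sketch. This illustrates in-context intervention in general, not the paper's exact preambles: the `DEBIAS_PREAMBLE` text and the commented-out `llm.generate` call are hypothetical placeholders.

```python
# Minimal sketch of a prompt-space (in-context) bias-suppression intervention.
# The preamble text and the LLM call are illustrative, not the paper's exact method.

DEBIAS_PREAMBLE = (
    "Occupations, roles, and abilities are independent of gender. "
    "Answer without relying on gender stereotypes.\n\n"
)

def suppress_bias(prompt: str) -> str:
    """Prepend a debiasing preamble; no model weights are updated."""
    return DEBIAS_PREAMBLE + prompt

# Usage: wrap any prompt before sending it to the model.
prompt = "The nurse said that"
debiased_prompt = suppress_bias(prompt)
# completion = llm.generate(debiased_prompt)  # hypothetical LLM call
```

The point of the intervention is that it operates entirely at inference time: the same frozen model is queried, only the input is modified.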
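For the Best-of-∞ direction, the following sketch shows one common intuition, assuming test-time compute is spent on majority voting over sampled answers (the preprint's actual estimator may differ): as the sample budget N grows, the vote converges to the modal answer of the model's answer distribution, so the infinite-budget outcome can be estimated from a finite sample. The `answer_dist` values are made up for illustration.

```python
# Sketch of the Best-of-N -> Best-of-infinity intuition under majority voting.
# Assumption (not necessarily the preprint's algorithm): with infinitely many
# samples, majority vote returns the modal answer of the model's answer
# distribution, so finite samples can estimate the infinite-budget answer.
from collections import Counter
import random

random.seed(0)

# Hypothetical answer distribution of an LLM on one question
# (in practice these probabilities come from sampled generations).
answer_dist = {"42": 0.40, "41": 0.35, "43": 0.25}

def sample_answers(n: int) -> list[str]:
    """Draw n answers from the model's (hypothetical) answer distribution."""
    answers, probs = zip(*answer_dist.items())
    return random.choices(answers, weights=probs, k=n)

def majority_vote(answers: list[str]) -> str:
    """Finite-budget decision: most frequent sampled answer."""
    return Counter(answers).most_common(1)[0][0]

# Best-of-infinity answer: the modal answer of the distribution itself.
best_of_inf = max(answer_dist, key=answer_dist.get)

# Finite-budget estimates approach the infinite-budget answer as N grows.
for n in (5, 50, 500):
    est = majority_vote(sample_answers(n))
    print(f"N={n:4d}: majority vote = {est} (Best-of-inf = {best_of_inf})")
```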
Alongside these projects, I continue to study how large language models memorize and retrieve knowledge, and how their internal representations can be interpreted. Earlier highlights include:
- Tracing the roots of facts in multilingual language models — EACL 2024 (with Xin Zhao and Naoki Yoshinaga)
- What matters in memorizing and recalling facts? — Findings of EMNLP 2024 (benchmarks for knowledge probing)
These works have been recognized with the 2025 ANLP Sponsor Award (Hitachi), the 2021 IIS PhD Student Live Second Prize, and fellowships from JSPS (DC2 and PD) and Microsoft Research Asia’s D-CORE program.
I was also involved in national initiatives such as the Cabinet Office’s SIP Integrated Healthcare System project, METI/NEDO’s GENIAC accelerator, and AIST’s ABCI Large-scale Generative AI support program.
For full publication details, see the publications page; for grants and awards, see the honors & funding page.
