
QED

Team: Cisil Karaguzel, Ming Zhang, Hatice Mutlu, Adnan Cihan Cakar, Matthew Gelvin


State-of-the-art language models have achieved human-level performance on many tasks but still face significant challenges in multi-step mathematical reasoning. Recent large language models (LLMs) demonstrate exceptional capabilities across diverse tasks, including common-sense reasoning, question answering, and summarization, yet they struggle with quantitative reasoning, such as solving complex mathematical problems. Mathematics serves as a valuable testbed for problem-solving ability in machine learning, highlighting the need for models capable of robust multi-step reasoning. The primary goal of this project is to develop a customized LLM that provides step-by-step solutions to math problems by fine-tuning a base LLM on a large mathematical dataset.
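To make the fine-tuning approach concrete, below is a minimal supervised fine-tuning sketch using Hugging Face transformers and the GSM8K dataset of grade-school math problems with worked solutions. The base model (gpt2), hyperparameters, and output directory are placeholders for illustration, not the team's actual configuration.

```python
# Minimal sketch: supervised fine-tuning of a causal LM on math
# word problems paired with step-by-step solutions (GSM8K).
# Model choice and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # placeholder base model; swap in the project's actual LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# GSM8K pairs each problem ("question") with a step-by-step
# solution ("answer"), which supervises multi-step reasoning.
dataset = load_dataset("gsm8k", "main", split="train")

def format_example(example):
    # Concatenate problem and worked solution into one training sequence.
    text = (
        f"Problem: {example['question']}\n"
        f"Solution: {example['answer']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qed-math-sft",      # hypothetical output directory
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, prompting the model with `"Problem: ...\nSolution:"` should elicit a step-by-step derivation in the style of the training data; in practice, larger base models or parameter-efficient methods such as LoRA would likely be needed for strong results.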

GitHub URL