Brown Launches AI Tool ‘Transcribe’ as Part of Tech Initiative
On October 20, Brown University took a significant step toward integrating artificial intelligence into its academic and administrative framework by launching Transcribe, an AI-powered transcription tool. Developed through the Brown AI Sandbox project, the initiative is part of a broader effort by the Office of Information Technology (OIT) to make AI resources widely accessible across the campus.
According to Christopher Keith, vice president of information technology and chief information officer, the tool was built in collaboration with faculty and staff. Keith noted in an email to The Herald that Transcribe is more cost-effective and performs better than many commercial alternatives.
AI Integration Across the University
Transcribe is just the beginning of Brown’s push toward institutional AI adoption. Last spring, students were granted free access to Google AI services through their university accounts, including tools like Google Gemini Chat and NotebookLM. These AI services are designed to support both academic and administrative functions, and are part of a growing suite of AI-powered tools offered by the University.
Michael Littman PhD’96, Brown’s associate provost of artificial intelligence, emphasized the importance of making these tools available to all students and faculty, not just those with the financial means to purchase premium services. “These tools are extremely relevant to the business of universities,” Littman said, highlighting that many researchers, especially in the social sciences, can benefit from AI-powered transcription for interviews and qualitative analysis.
Language Support and Data Security
Transcribe currently supports a wide array of languages, including Arabic, Chinese, Dutch, Spanish, Italian, German, Russian, Portuguese, Japanese, French, and Vietnamese. However, unlike the Google AI tools, which have received the University’s highest data security clearance (level three), Transcribe is presently cleared only for levels one and two. In practice, this means Transcribe can handle moderately sensitive information but is not yet authorized for the University’s most sensitive data.
To protect user privacy, Google is prohibited from using University data to train its models. However, a limited number of OIT staff members can access usage logs, including the content of chat queries, under rare and specific circumstances. This is consistent with the University’s Acceptable Use of Information Technology Resources Policy, which states that while the University reserves the right to monitor and access data, it also respects the privacy expectations of students, faculty, and staff.
Future AI Tools and Educational Collaboration
Looking ahead, the Brown AI Sandbox project plans to expand its offerings. One upcoming tool, Librechat, will consolidate various large language models into a single chatbot interface, streamlining access and functionality for users. The University is also fostering collaboration between the OIT and the Sheridan Center for Teaching and Learning. This includes work by the Generative AI in Teaching and Learning Committee, co-chaired by Littman and Eric Kaldor, the Sheridan Center’s director for assessment and interdisciplinary teaching programs.
“I hope more students and faculty will feel comfortable exploring the capabilities and limitations of large language models in their fields of expertise,” Kaldor said. He stressed the importance of developing critical AI literacy among the academic community.
Student Awareness and Utilization
Despite the launch of these tools, some students remain unaware of their availability. Hilary Nguyen ’27 recently discovered she had access to Gemini’s premium features but did not realize it was due to Brown’s partnership with Google AI. She mentioned using AI occasionally to understand complex biochemistry figures but still prefers ChatGPT for most tasks.
Nick Burleson ’29 also favors ChatGPT and was unaware of his access to Google AI tools. He expressed concerns about potential misuse of AI and suggested that the University implement stricter guidelines to prevent abuse.
Faculty Involvement and Classroom Use
Faculty members are also grappling with how to incorporate AI into their teaching. Seth Rockman, professor of history and director of undergraduate studies in the department, issued a memo earlier this semester urging fellow faculty to prepare for increased use of AI in the classroom. He warned that to remain competitive in the job market after graduation, students must cultivate skills that surpass what AI can replicate.
Meanwhile, some faculty have already started integrating AI tools into their curriculum. James Valles, professor of physics, encourages students to use AI for assignments and even uses it himself to design course materials. His syllabus includes guidelines for how students should credit and reflect on their AI usage.
“I support the efforts that the University is making to assess the current use of generative AI and to set expectations for how it might be leveraged in the future,” Valles said.
