How Stanford Journalism alums are innovating — and scrutinizing AI in newsrooms

Stanford Journalism alums are at the forefront of efforts to define and implement best uses of AI technologies in newsrooms

After the September shooting death of right-wing activist Charlie Kirk, The New York Times' Artificial Intelligence Initiatives team analyzed dozens of Kirk's public debates going back to 2017. The result was a fascinating examination of Kirk's debate tactics and how they were refined over the years to produce highly shareable moments for social media.

Dylan Freedman (MA 2018), who was recently named as the Times' AI projects editor, said his team had conducted similar analyses before, but those took months to complete. With the help of AI, the Kirk project took only two weeks. 

To Freedman, the project was emblematic of AI's power. "When wielded with a lot of principles and careful uses, [AI] can genuinely be very helpful and help people tell more ambitious stories," he said.

Artificial intelligence is rapidly transforming whole industries, and journalism is no exception. As the use of AI becomes more ubiquitous in newsrooms, Stanford Journalism Program graduates are finding themselves experimenting, evaluating and ultimately implementing these technologies while also confronting potential pitfalls.

Whether it's leading a team that develops tools for investigative reporters, experimenting with automation, or creating workflows to help under-resourced newsrooms become more efficient, Stanford Journalism alums are at the forefront of efforts to define and implement best uses of AI technologies.

Cheryl Phillips, Hearst Professional in Residence, said Stanford graduates equipped with the foundational principles of journalism, along with hands-on experience in multimedia and data classes, are uniquely positioned to help newsrooms harness the power of AI to improve newsroom systems and to tell stories that otherwise would not be possible.

"When [graduates] get to a newsroom, they can be some of the critical thinkers that are needed to be able to address, 'How is the newsroom going to use [the technology]?'" said Phillips.

Freedman credits Stanford Journalism for helping him understand the variety of roles and responsibilities within a newsroom — knowledge that he uses in his job daily. As someone who already possessed strong technical skills coming into the program, Freedman said the core courses — News Writing Fundamentals, Multimedia Reporting and Public Affairs Data Journalism — provided him with foundational principles of the craft.

"I think it instilled such a deep ethos for how to responsibly hold people to account and report the news fairly," Freedman said.

Freedman, a former Google software engineer who pivoted to journalism at Stanford, joined The New York Times in 2024 to work on its newly formed AI team. The team works on tools that can help reporters uncover stories while also telling stories to help the public better understand these emerging technologies.

"We're really focused on … using AI in service of the mission of journalism and being transparent, ethical and always having human guidance," he said.

Freedman is developing an internal tool that will help journalists analyze text and multimedia. Since joining The Times, he has frequently used AI to conduct large-scale analysis of massive amounts of unstructured data, such as politicians' and controversial figures' public speeches. Freedman is also investigating the technology itself and its potential for harm. For example, he used AI to comb through a 3,000-page transcript to study how one man's AI chatbot conversation devolved into delusion.

"I'm kind of occupying this uncomfortable space of using this technology, but then the technology has real potential for harm," he said.

Freedman said AI is an excellent research tool, but he doesn't believe in using it to write stories.

"I never use AI [to write] because I really care about my voice," he said. "I don't want my voice to be millions of trapped souls of writers that the AI has been trained on."

Caroline Ghisolfi (MA 2021), Deputy Data Editor at the Houston Chronicle, said her newsroom uses AI cautiously, always leaving plenty of room for manual checks. As someone who oversees a team of reporters and manages projects involving advanced data analysis, Ghisolfi said she often helps her colleagues understand when AI should be used in an investigation and, just as importantly, when it shouldn't.

"My rule is … only use AI to help you do things that you already kind of know how to do," said Ghisolfi, meaning a reporter must be able to explain and independently verify the accuracy of the results.

"You just cannot expect [AI] to do that in a way that's going to be accurate and defensible and bulletproof in the way that we want all of our data analysis to be," she said. "We use AI only when it's going to save us a considerable amount of time or allow us to do something that otherwise we couldn't do at all."

Past successful projects have involved using specially trained models to process vast amounts of data. One recent investigation examined how law enforcement used a public surveillance system, which required analyzing hundreds of thousands of search terms; it revealed that officers were often misusing the tool. In a previous role at the Austin American-Statesman, Ghisolfi also used machine learning to analyze 240,000 public records disputes handled by the Texas attorney general's office.

Ghisolfi said she often goes back to a saying she heard in one of Cheryl Phillips' data journalism classes, "If your mom tells you she loves you, check it out."  

"Just because a source says something, just because someone in the field says something, that's just not sufficient," Ghisolfi said. "Usually, you would have the rule of having three independent sources and trying to see if they all say the same thing before you publish."

She said the same rigor must be applied when AI is part of the mix.

"There are always, always errors, and there's just no way for you to be … confident in the way that you are using AI if you're not also spot checking a large amount of what the AI is doing," she said.

Stanford Journalism students now get a first-hand look at AI tools through the required course AI in Journalism, taught by Lecturer Djordje Padejski, who also serves as associate director of the JSK Fellowships program.

Padejski said that with so many changes happening in the world of AI, his course is meant to build students' AI literacy, demystify the complex technologies behind it and encourage critical analysis of its risks. 

"The more that we know about it, the more we are in control, the more that we can utilize it in smart ways," Padejski said. "My mantra is, no AI is going to replace the journalist. But a journalist using AI might replace those that are not."

The class was deeply influential for Hannah Woodworth (MA 2025), who gravitated toward the topic and focused her coursework and internships on learning as much as she could about how journalism and technology intersect. 

As a Rebele intern with the American Journalism Project, she conducted research around specific products used in newsrooms. She later became a data intern at Reuters and is currently working with the organization to evaluate AI conversion and extraction tools for data manipulation. 

Woodworth is also working on a Brown Institute Magic Grant project that was born out of Stanford’s Computational Journalism class — developing a knowledge graph with various AI capabilities to create tools that support news gathering, verification and production work. 

Woodworth said her time at Stanford gave her the confidence to dive into technical topics and to challenge herself to learn something new. 

"[Stanford Journalism is] equipping journalists with the confidence that you can do technical things," she said. "I don't know how to do something, but I know I'll be able to figure it out."

More than the skills and mindset, Woodworth said, Stanford is unique in providing access to thought leaders who are constantly thinking about how to advance media and storytelling through technology. 

"There are people out here who are talking about things that are incredibly dynamic and multi-dimensional," she said. 

In multimedia and data journalism classes, students learn the basics of coding and programming for data analysis. Many of the project-based courses, including Public Affairs Data Journalism, Big Local Journalism, Watchdog Reporting, and Exploring Computational Journalism, expose students to how AI is used in journalism.

Guilherme Guerreiro (MA 2023) said that without Stanford Journalism, he wouldn't be able to do his current role. Through data journalism classes and independent study, he learned a range of technical skills, including data analysis, coding, and using tools such as Python and QGIS.

"I hadn't foreseen that I was going to be involved in this world, but because I was taught to be flexible, because Stanford fosters this environment, I was ready to embrace it," Guerreiro said.

At Folha de São Paulo, a daily newspaper in Brazil, Guerreiro serves as a bridge between traditional journalists and software engineers, working as a data journalist and AI developer. Guerreiro said he helps develop AI-driven tools to support newsroom processes — from generating videos from an article to automated image descriptions. His team also built products for audiences, such as a chatbot that provides information about breast cancer.

"AI is a tool. It's a way to convey and to achieve that journalistic mission. But it shouldn't take away. It shouldn't substitute," he said.

At Stanford, Özge Terzioğlu (MA 2023) also focused on learning all she could about data journalism and coding with Python — technical skills she now uses every day in her role as Automated Content Specialist with Gannett. Serdar Tumgoren's Building News Apps class was integral in helping her think through how to bring a technical project to life from start to finish, she said.

"That is basically my whole job now," Terzioğlu said. "The way we talked about it in class …  think ahead to any problems that you could foresee, think of how the user is going to view this and incorporate that as you're building."

Terzioğlu said she appreciated a "culture of saying yes" among her professors, who didn't discourage her from tackling complex projects and instead helped her troubleshoot through the challenges.

Terzioğlu initially joined Gannett after graduating to help the organization restart its news automation unit. Using natural language generation, she created workflows that generated automated stories for small newsrooms — some with only one or two staff members.

These stories ranged from local weather forecasts to highlights of new rental data for specific communities.

Terzioğlu now maintains and manages the workflows she created and helps develop new tools, such as automated tipsheets and alerts on federal layoff data for reporters.

Terzioğlu said she constantly thinks about the ethical implications of automating stories. Gannett has an AI council that reviews projects to ensure standards, she added.

"I think about the ethics of this every day, honestly," she said. "We're not just sending stories to print without people looking at them. I know the local editors are … integrated into my workflow, so my stories cannot publish without someone looking at them."
