Science Symposium Discusses AI, Its Positive and Negative Uses, New Privacy Software

Amid an academic culture increasingly wary of AI, guest speakers at Wheaton discussed where Christians in different disciplines should be optimistic – and concerned – about the technology.

On April 4 and 5, Wheaton College hosted its third annual science symposium. This year’s theme was artificial intelligence (AI), focusing on both its positive and negative implications for society. Over two days, speakers explained how machine learning can be a useful tool for medicine, humanitarian aid and Bible translation. The keynote speaker was Emily Wenger ’16, a research scientist at Meta AI, who spoke about the dangers of generative AI, particularly for artists.

Wenger discussed the social harms of AI. Her research focuses on identifying and mitigating the unintended consequences of artificial intelligence.

One of those consequences is the erosion of user privacy. Anyone with a computer and enough willpower can train a model to recognize people’s faces. In response, Wenger created Fawkes, a privacy armor that subtly alters images before they are shared so facial recognition systems trained on scraped copies cannot reliably identify the people in them, temporarily protecting users from being recognized by artificial intelligence. Wenger said the tool exploits one of AI’s fundamental weaknesses: it cannot recognize context, so a change of a few pixels, invisible to the human eye, can completely mislead the algorithm.

[Caption: This year’s poster for the science symposium, featuring four different guest speakers.]

Wenger said many creators of generative AI have used artists’ work in their training data, often leading the AI to reproduce images in the style of those artists. Many artists expressed frustration that their work was being used without permission or compensation. Wenger developed software called Glaze, which, similar to Fawkes, slightly distorts the pixels of artwork so that artificial intelligence is unable to produce an accurate copy. For her and her collaborators’ work on The Glaze Project, Wenger was named to the 2024 Forbes 30 Under 30 list.

These programs are only temporary solutions, however. Wenger said that as AI becomes more proficient in pattern and facial recognition, it may get better at interpreting context and defeat the stopgap. But for now, she said, artists who felt they had lost their professional identity can continue to promote their work online without fear of it being plagiarized.

Wenger said the software is free to use, and the creators don’t take royalties.

“We didn’t feel comfortable profiting off of other people’s misfortunes,” she said.

Thomas VanDrunen, professor of computer science and one of the symposium organizers, told the Record this topic was selected because of its timeliness, given the rapid development of generative AI in recent years. 

VanDrunen said the organizers wanted the event to cover the implications of AI for art, biblical studies and other disciplines as well as science. 

“We wanted the symposium to have a positive vibe, and approach the topic from a multidisciplinary perspective,” he said. 

On April 4, Joshua Swamidass, a professor at Washington University in St. Louis, told the audience he was optimistic about the benefits of machine learning for modern medicine. His research includes drug toxicity and metabolism, and he said artificial intelligence can be used to recognize the patterns in chemical compounds that cause them to become toxic when metabolized in the body. Swamidass said humans are rarely capable of distinguishing between toxic and nontoxic isomers, molecules that share the same chemical formula but differ in the arrangement of their atoms.

Swamidass also spoke about the role that ChatGPT, a chatbot released in 2022 that uses generative AI to produce content and simulate conversation, can play in medicine. He said ChatGPT is able to diagnose potentially fatal diseases, citing how the chatbot diagnosed meningitis in a hypothetical HIV patient who would need to be treated immediately. In medical school, students are often given a list of symptoms a patient is experiencing and are expected to come up with a reasonable diagnosis.

Swamidass said students often have to read between the lines to infer information that isn’t explicitly provided, which makes these diagnoses difficult to reach. But ChatGPT was able to outperform many first- and second-year medical students at Stanford University on these diagnoses.

“The surprising thing we’re finding out is that you don’t need to have a mind to have general intelligence,” he said. “ChatGPT wasn’t trained to do medical stuff at all, yet it does medical stuff better than the vast majority of people.”

Artificial intelligence has also improved the supply of kidneys for transplants: many donated kidneys are discarded because they are erroneously judged to have too much damage. Swamidass said he worked with his colleagues to train machines to recognize damaged tissue, which they can do more accurately than many of the on-call pathologists who would normally do the work.

He added that most medical artificial intelligence will never be used in practice, primarily because it’s made by people who are very knowledgeable about machine learning but not about medicine.

Swamidass closed his talk by discussing the need for institutional review of artificial intelligence. He said he is concerned that AI could be used against religious minorities and other marginalized groups around the world because it makes persecution easier to carry out.

“As Christians, if we don’t say something about this, no one will,” said Swamidass. “We should be thinking and praying for ways to implement that. Because I think we’ll really see an increase in the persecution of the church in certain countries.”

One audience member said he was struck by how neither machines nor people will ever be nearly as knowledgeable as God.

“God has this privilege of having universal knowledge of us. Sometimes we forget that,” he said.

Swamidass said that despite AI’s effects, both positive and detrimental, its complexity can lead us to ask deeper questions.

“That’s what I think is really interesting about AI — it’s pressing us to question what it means to be human,” he said. 

The departments have already scheduled next year’s symposium, which will be held on March 19 and 20 and is titled “Scientific Practice and Prophetic Imagination.”

Orli Strickman

Orli Strickman is a freshman biology major from New York City. She enjoys reading, cooking and spending time outdoors.
