AI on caffeine
Credit: Rowan Walrath/C&EN via Grok
Caffeine dreams: Grok generated this image when given the prompt “draw the molecular structure for caffeine.”
Generative artificial intelligence (genAI) software isn’t coming for chemists’ jobs just yet—at least not when it comes to visualizing molecules.
As an experiment, this Newscriptster directed three genAI chatbots to draw the chemical structure of caffeine. When Grok, the genAI tool made by X, was asked to "draw a caffeine molecule," the program provided images of grayish, amorphous collections of spheres linked by what might be chemical bonds, all with the characteristically glossy, dreamlike sheen of genAI "art." Aside from looking "menacing," as one C&EN colleague put it, none of the images was evocative of a chemical structure, caffeine or any other, so Grok got more-specific instructions: "Draw the molecular structure for caffeine."
This time, Grok supplied a single image that looked more like a spiderweb than like caffeine. "Caffeine, or MOF? You decide," said another C&EN colleague. (Notably, caffeine is not a metal-organic framework.) Grok's final faux structure featured carbons with five bonds, a motif real carbon is loath to adopt.
Next up, Google Gemini was prompted to create an image from the instruction "molecular structure for caffeine." The four images it generated were wildly different from one another; one included the right number of bonds, but nearly everything else was wrong. Gemini also tried to label three of the images with caffeine's chemical formula. All three labels were different, and none was correct. One included a symbol that Gemini seems to have conjured out of thin air; another bore the phrase "Occlisge inhine," total nonsense.
Finally, ChatGPT, which now incorporates the image-generating software Dall-E 3, got its chance. Interestingly, ChatGPT said it “can’t generate a visual of the molecular structure right now,” but it offered to give step-by-step drawing instructions.
ChatGPT did provide the correct chemical formula for caffeine, C8H10N4O2. But most of its other instructions were wrong, repetitive, or not specific enough. In one step, for instance, it said to “make sure all bonds are clearly shown,” but at no point did it say between which elements to draw double bonds.
GenAI tools still can’t hack it.
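For the record, drawing a correct structure is a long-solved problem for rule-based software. As a point of contrast (this sketch is ours, not part of the experiment above, and the output file name is arbitrary), the open-source RDKit toolkit lays out caffeine's 2D structure deterministically from its SMILES string:

    # A minimal sketch: rendering caffeine with the open-source RDKit
    # toolkit rather than a generative model.
    from rdkit import Chem
    from rdkit.Chem import Draw
    from rdkit.Chem.rdMolDescriptors import CalcMolFormula

    # Build the molecule from caffeine's SMILES string
    caffeine = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")

    # Prints C8H10N4O2, the formula ChatGPT got right
    print(CalcMolFormula(caffeine))

    # Writes a correct 2D depiction; no five-bonded carbons here
    Draw.MolToFile(caffeine, "caffeine.png", size=(300, 300))

Every run produces the same chemically valid drawing, which is precisely what the chatbots could not manage.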
A code of ethics
Credit: Felice Frankel
Disappearing dish: Felice Frankel’s image of a yeast colony in a petri dish (left) is shown next to an altered version.
Felice Frankel has been a science photographer for more than 30 years. She's used to the ethical questions around altering images, like digitally removing a petri dish from a photo of yeast that's bloomed in the shape of a flower or making smaller changes like cleaning up scratches or color imperfections. "It's perfectly fine to illustrate a concept or even a structure, as long as it is labeled as illustration," Frankel tells Newscripts. "There's no question that I've cleaned up some of the ones I've made photographically." The key is disclosure: "I indicate that I've done so."
That's not a given for every image that's been altered with artificial intelligence, or even created out of whole cloth with a genAI tool like OpenAI's Dall-E. In a recent essay for Nature, Frankel argues that researchers need an ethical code of conduct for AI-generated images (2025, DOI: 10.1038/d41586-025-00532-2).
At a minimum, Frankel says, researchers need to answer four questions about any images they publish or submit to a scientific journal. Has the image been generated by AI? If so, by what genAI model and version? What prompts did you use to generate the image? And did you include a reference image along with the prompt?
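In practice, those answers could travel with the figure as a simple metadata record. A hypothetical example in Python (the field names are ours, not Frankel's, and the values are illustrative):

    # A hypothetical disclosure record answering Frankel's four questions.
    # Field names and values are illustrative, not a proposed standard.
    image_disclosure = {
        "ai_generated": True,
        "model_and_version": "Dall-E 3",
        "prompt": "create a photo of nanocrystals in vials against a black background",
        "reference_image_included": False,
    }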
“Images are how we learn,” Frankel says. “Of course pictures are a means of engaging the public, but we must be honest in our pictures.”
Frankel has done her own tinkering with genAI software. Recently, she directed Dall-E to emulate a photo of hers from 1997, depicting vials of the fluorescing nanocrystals that won Massachusetts Institute of Technology chemist Moungi Bawendi a share of the 2023 Nobel Prize in Chemistry. Her prompt: “create a photo of Moungi Bawendi’s nanocrystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”
Dall-E generated falsehoods. It rendered the nanocrystals as solid beads, and each vial in its image held beads of several colors, implying a mix of fluorescing materials. In reality, each of Bawendi's vials contained nanocrystals of a single size, fluorescing at a single wavelength.
Frankel expects that software like Dall-E will eventually get better. That’s why journal editors, she says, should require disclosures for AI-generated images now.
“There will be a time when AI will be able to produce a noncartoonlike representation of one thing,” she says. “It’s just that we’ve got to catch it now, because people will start using it as documentation.”
Please send comments and suggestions to newscripts@acs.org.