Worker says the office ChatGPT suddenly started calling her “baby girl” — and that was how she found out a coworker had turned the shared account into an explicit AI boyfriend
A worker in India said a normal office tool started behaving so strangely that she knew something had gone badly wrong before she even understood why. In a story later collected by r/BestofRedditorUpdates, she wrote that her workplace lets employees use a shared ChatGPT Plus account for routine work and that one day the system began responding in a completely different tone. In a comment quoted in the roundup, she said she realized something was off when it started replying with lines like "hi baby girl yes I can definitely do that" while she was simply trying to get it to convert something into a PDF.
According to the BORU post, the explanation was much worse than a glitch. She later said she discovered that a coworker had filled the shared ChatGPT memory with explicit BDSM roleplay and was effectively using the company tool as a private sexual fantasy chatbot. She wrote that the content became so embedded in the account that the tool stopped functioning normally for work, and that coworkers were being exposed to the sexual material without ever consenting to see it. The roundup also notes that commenters reacted in disbelief at the detail that the coworker was not just generating porn but had allegedly worked coworkers' names and likenesses into some of it.
The worker said she first brought the problem to her manager expecting, at most, a quiet instruction to tell the coworker to stop. Instead, she wrote, the manager immediately told her the situation was grounds for a POSH complaint. She explained in the post that POSH refers to India's Prevention of Sexual Harassment framework for workplaces, and that her case fit because employees had been exposed to sexual content on a shared work tool and the environment had become uncomfortable and hostile. She said she ended up filing a formal complaint and that HR told her it was the first time in the company's history that a woman had filed a POSH complaint against another woman.
She also described the investigation as much more formal than she expected. In the lines preserved in the BORU thread, she said the internal committee interviewed her for nearly an hour, asking how she discovered the material, whether she had already asked the coworker to remove it, whether the exposure affected her ability to work, and whether she felt unsafe. She wrote that investigators also reviewed the shared ChatGPT chats, and that those records essentially confirmed the coworker had been roleplaying sexually with the bot and then dropping actual project details into the same account afterward.
By the time commenters were discussing the post, the worker said she was no longer deeply involved because management only needed her side of the story. But she also said she believed the coworker would be terminated, and later added in a comment highlighted by the BORU editor that the coworker had been suspended without pay, with the rest of the POSH process kept confidential. The roundup marked the saga as concluded because the poster stepped back from the case, even though the final internal outcome was never made fully public.
The story landed hard because it mashed together two things people already find unsettling on their own: the privacy risks of shared workplace tools and deeply inappropriate office behavior. What began as a weird chatbot response ended with a formal harassment complaint, an internal investigation, and an office left explaining how an everyday work tool had been turned into one employee's explicit fantasy channel.
Here’s the original Reddit post.

Abbie Clark is the founder and editor of Now Rundown, covering the stories that hit households first—health, politics, insurance, home costs, scams, and the fine print people often learn too late.
