Grok has announced a policy change that removes users' ability to strip clothing from images of real people in any region where the practice is prohibited by law. The update follows growing regulatory scrutiny of generative AI tools that can alter photographs in ways that infringe on personal rights or defame and harass their subjects. The platform's engineering team has stated that the feature removal is a direct response to legal clarifications in multiple U.S. jurisdictions, including New York and California, where image manipulation that affects a person's bodily privacy has been declared unlawful. Users of the service will no longer see the clothing-removal option in the editor interface, and requests to perform such edits are blocked by automated checks.
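The article describes edits being blocked automatically based on the requesting user's jurisdiction. A minimal sketch of that kind of region-gated check might look like the following; all names, regions, and the function itself are illustrative assumptions, not details from Grok's actual implementation.

```python
# Hypothetical jurisdiction-based feature gate, sketched from the article's
# description; none of these identifiers come from Grok's codebase.

# Regions where the edit type is barred by law (per the article: NY and CA).
RESTRICTED_JURISDICTIONS = {"US-NY", "US-CA"}
# Edit types removed from the interface and blocked at request time.
BLOCKED_EDIT_TYPES = {"clothing_removal"}

def is_edit_allowed(edit_type: str, user_region: str) -> bool:
    """Reject a restricted edit type when requested from a restricted region."""
    if edit_type in BLOCKED_EDIT_TYPES and user_region in RESTRICTED_JURISDICTIONS:
        return False
    return True
```

Under this sketch, other editing features such as background removal would pass the check unchanged, which matches the article's point that lawful features remain available.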

The decision aligns with a broader industry movement to tighten the governance of AI-driven content creation. Companies in the field have introduced stricter content filters after a series of high-profile incidents in which AI-generated nude images circulated online, raising concerns about platforms' role in facilitating non-consensual exposure. Removing this function is one technical measure aimed at preventing the misuse of AI editing capabilities while preserving features such as background removal and color adjustment that are generally considered lawful. The policy also notes that compliance will be monitored through a combination of automated algorithms and user-reported complaints, allowing the platform to adapt to evolving statutory requirements.
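The paragraph above mentions monitoring compliance through both automated algorithms and user reports. One plausible way to combine the two signals is to escalate an item for review when either crosses a threshold; the structure, thresholds, and field names below are assumptions for illustration, not details from the policy.

```python
# Illustrative moderation-escalation sketch: an item is flagged for human
# review if an automated classifier is confident enough on its own, or if
# enough users have reported it. Thresholds are invented for the example.

from dataclasses import dataclass, field
from typing import List

AUTO_FLAG_THRESHOLD = 0.9   # classifier confidence that triggers review alone
REPORT_THRESHOLD = 3        # user reports that trigger review regardless of score

@dataclass
class ContentItem:
    item_id: str
    classifier_score: float                      # 0.0-1.0 from an automated policy model
    user_reports: List[str] = field(default_factory=list)

def needs_review(item: ContentItem) -> bool:
    """Escalate when either the automated or the user-report signal crosses its threshold."""
    return (item.classifier_score >= AUTO_FLAG_THRESHOLD
            or len(item.user_reports) >= REPORT_THRESHOLD)
```

Keeping the two signals independent, as here, means a low-scoring item can still reach a reviewer once enough complaints accumulate, which is the usual rationale for pairing automated filters with user reporting.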

Legally, the update references specific statutes that prohibit non-consensual alteration of a person's image. For example, it cites California Civil Code Section 1714.30, which it describes as making it unlawful to produce any image that modifies a person's appearance in a way that could be deemed defamatory or humiliating. By proactively disabling the clothing-removal tool in these jurisdictions, Grok positions itself within the compliance framework established by law. From a regulatory perspective, the move could set a precedent for other AI image-editing services facing similar legal challenges. The policy statement also indicates that the platform will continue to engage with lawmakers and advocacy groups to clarify the boundaries of permissible image manipulation, seeking to balance user creativity with ethical considerations.