Shangjie Automotive moved quickly to rebut social‑media accusations that the winter‑testing images it released of its upcoming Z7 crossover were generated by artificial intelligence. In a February 9 statement on its official WeChat account, the company said the photographs are genuine, taken during cold‑weather trials, and that visual obfuscation, including body camouflage and blurred close‑ups, was applied to protect confidential design details.
The short clarification is a reminder of two converging pressures on automakers today: the marketing imperative to show progress on new models and the growing skepticism over imagery in an age of photorealistic AI. Enthusiasts and commentators had raised questions after the Z7 pictures circulated online; Shangjie’s reply sought to head off allegations of fakery while signaling continued work on the vehicle.
Winter testing is routine in the automotive industry. Manufacturers expose prototypes to subzero temperatures to validate batteries, software and mechanical systems in extreme conditions, typically in remote northern regions such as Scandinavia. But those tests also generate material, from spy photos to official teasers and staged shots, that can feed both legitimate interest and conspiracy‑minded speculation when the provenance of images is unclear.
For Chinese‑market startups such as Shangjie, the stakes are practical and reputational. A credible leak or staged reveal can build early buzz for a new model; a perceived deception can quickly erode trust among consumers, media and investors. The practice the company describes, camouflage and selective blurring for "product design secrecy", is standard in the industry, but in the current media environment it also invites further scrutiny over why fuller, verifiable imagery was not released.
The episode sits at the intersection of two broader trends: the rapid improvement of generative AI tools that can fabricate near‑perfect photos, and the intensifying competition in China’s electric‑vehicle sector, where startups vie for attention. Both dynamics increase the cost of missteps. As AI makes it easier to fake visual evidence, automakers will need clearer standards for demonstrating authenticity, from releasing higher‑resolution raw images and metadata to time‑stamped videos and independent third‑party confirmation.
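By way of illustration only, and not something Shangjie has described doing, here is a minimal Python sketch of the kind of metadata check an outside observer could run on a released image. It assumes the Pillow library is installed, and the file name is hypothetical; EXIF data can itself be stripped or edited, so such a check is supporting evidence rather than proof.

```python
# Illustrative sketch: inspect EXIF metadata in a published test image.
# Assumes Pillow is installed (pip install Pillow); the file name below is hypothetical.
from PIL import Image, ExifTags


def print_exif(path: str) -> None:
    """Print whatever EXIF tags the image carries; many are stripped before publication."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found (it may have been stripped).")
            return
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag ID to a readable name
            print(f"{tag_name}: {value}")


print_exif("z7_winter_test.jpg")  # hypothetical file name
```

Fields such as capture timestamps and camera model, where present, can corroborate a claim that a photo was shot in the field rather than generated, which is one reason releasing richer metadata is among the transparency measures noted above.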
Beyond PR mechanics, regulators and platforms may be drawn into the debate. Chinese authorities have already shown sensitivity to misinformation online, and tech platforms face pressure to police doctored content. For automakers, the cheapest path is transparency: lighter secrecy around innocuous angles, clearer context about how and where testing is done, and rapid, evidence‑backed rebuttals when doubts arise.
Shangjie’s brief denial has for now contained the controversy, but it also exposes a vulnerability common to emerging carmakers. In a market where consumers shop on reputation as much as on specs, the ability to prove progress, and to do so in a way that earns credibility rather than invites suspicion, will become part of the competitive toolkit.
