Claude Becomes the First AI Platform to Implement Face Verification
On April 14, Anthropic announced that it is gradually rolling out an identity verification mechanism for its AI product, Claude. According to the official statement, verification will occur in specific scenarios, such as when users attempt to access certain advanced features or as part of routine platform integrity checks and other security measures.
During the verification process, users must provide a government-issued photo ID (such as a passport, driver’s license, or national ID) and use a camera-equipped device, as they may be required to take a real-time selfie.
Many users in the community have expressed strong concerns about privacy risks, data sharing, and policy transparency, particularly regarding the reliability of third parties. Anthropic has chosen Persona Identities, a leading identity verification company in the U.S., as its third-party provider. Persona also provides identity verification services for major platforms like OpenAI and Discord, although it has previously been criticized for security vulnerabilities.
Stanford AI Annual Report Shows Growing Divide Between Experts and the Public
The Stanford AI Index Report, released on April 15, is one of the most authoritative references tracking AI development. The 2025 update covers various dimensions, including research and development, technical performance, responsible AI, economic impact, scientific applications, healthcare, and education.
A notable finding is the significant difference in perceptions between AI professionals and the general public. The report indicates that 73% of experts view AI’s impact on work positively, while only 23% of the public shares this view. Additionally, 69% of experts believe AI will have a positive economic impact, compared to just 21% of the public.
Other data cited from the Pew Research Center shows that AI experts are less pessimistic about AI’s impact on the job market, while nearly two-thirds of Americans (64%) believe AI will lead to job losses in the next 20 years.
Moreover, the average score of the annual ‘Foundation Model Transparency Index’ dropped sharply from 58 to 40. Out of 95 well-known models released last year, 80 did not disclose their training code.
EU Seeks Unified Social Media Restrictions for Minors
On April 17, French President Macron organized a meeting regarding social media restrictions, attended by leaders from Germany, Greece, Ireland, Italy, Spain, and EU Commission President Ursula von der Leyen. Macron aims to strengthen protections for children and adolescents in the digital space and enhance the obligations and responsibilities of major online platforms.
Each participating country outlined its plans, primarily focusing on restricting minors’ access to social media. For instance, Greece announced that it would ban children under 15 from using social media platforms starting January 1, 2027, due to potential impacts on children’s mental health.
The EU has also completed an age verification application built on the EU digital identity wallet framework. It aims to standardize age verification methods across EU countries for enforcement purposes while purportedly addressing privacy concerns: users can prove their age without disclosing other personal information.
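The privacy claim rests on selective disclosure: the wallet presents a signed attestation of a single claim (e.g. "over the age threshold") rather than the holder's birth date or identity. The sketch below illustrates that idea with a shared-secret HMAC standing in for the wallet's real credential scheme; all names and the key handling are illustrative, not the EU wallet's actual API (a real deployment would use public-key signatures so the verifier never holds the issuer's secret).

```python
import hmac
import hashlib

ISSUER_KEY = b"issuer-demo-key"  # stand-in for the issuer's signing key (illustrative only)

def issue_age_attestation(over_threshold: bool) -> dict:
    """Issuer signs only the boolean age claim -- no birth date, no name."""
    claim = "over_threshold=true" if over_threshold else "over_threshold=false"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(att: dict) -> bool:
    """Verifier learns whether the threshold is met, and nothing else."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and att["claim"] == "over_threshold=true"

att = issue_age_attestation(over_threshold=True)
print(verify_attestation(att))  # True: age check passes with no other data disclosed
```

The verifier's check succeeds or fails on the single signed claim alone, which is the property the EU application purports to provide.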
E-commerce Platforms Fined 3.5 Billion Yuan, Companies and Employees Penalized
On April 17, the State Administration for Market Regulation imposed administrative penalties on seven e-commerce platforms involved in the ‘ghost delivery’ case, ordering them to rectify illegal activities, suspend new cake shop listings for 3 to 9 months, and pay a total fine of 3.597 billion yuan.
Ranked by the amount of fines, Pinduoduo was fined a total of 1.52 billion yuan, Meituan 740 million yuan, JD.com 630 million yuan, Ele.me (Taobao Flash Purchase) 550 million yuan, Douyin 56.89 million yuan, Taobao 46.97 million yuan, and Tmall 31.74 million yuan.
The penalties stemmed from the ‘one-click transfer’ behavior of cake shops that transferred orders to other food operators without informing consumers, while the e-commerce platforms had secretly signed cooperation agreements with these transfer platforms. The administrative penalty document indicates that this behavior violated the requirements for platform qualification review under the ‘Network Catering Service Food Safety Supervision and Administration Measures’ and the ‘E-commerce Law of the People’s Republic of China.’
The penalty decision for Pinduoduo also noted that during the investigation, the State Administration for Market Regulation issued multiple notices demanding materials and ordering rectification, but the company repeatedly refused to provide the relevant materials or information without justification, or provided false materials, even resorting to violent obstruction and passive ‘soft resistance’ against regulatory enforcement.
Notably, a 3.5 billion yuan fine is rare under e-commerce law and food safety law; it was calculated by assessing violations merchant by merchant and accumulating the penalties. Beyond the corporate fines, the personnel responsible for food safety at the platforms were also fined nearly 20 million yuan.
AI Identifies Pirated Links, Platform Found Not Liable
The Shanghai High Court recently announced a case involving AI search and infringement of network dissemination rights. An AI company provided pirated links to a TV drama on its search platform, featuring these links prominently in search results. The copyright holder argued that the AI company had edited its algorithm to highlight clearly illegal links, thereby infringing on the copyright holder’s interests.
The court ultimately ruled that the AI company was not liable, mainly because existing evidence could not prove that the company had manually edited or recommended the search results, making it difficult to establish fault. Moreover, the company promptly addressed the infringing links once it became aware of them.
The judge highlighted two key determinations: the AI search engine qualifies as a network service provider, but the platform did not actively ‘recommend’ the content and therefore did not commit infringement. ‘Recommend’ here refers to a network service provider recognizing that infringing content exists on its platform and actively marking or promoting that content to attract user attention. Unlike general search engines, the search platform in this case was built on large language models and RAG (Retrieval-Augmented Generation) technology, and the evidence could not prove that the AI company had manually edited, selected, or recommended the search results, making it difficult to establish that the company had ‘knowledge’ of the infringing content and was at fault.
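The distinction the court drew — results assembled automatically by a retrieval algorithm rather than links hand-picked by staff — can be sketched as the retrieval step of a bare-bones RAG pipeline. This is hypothetical illustration code, not the platform at issue in the case; real systems use embedding similarity rather than the token-overlap score used here.

```python
# Minimal retrieval step of a RAG-style search: results are ranked purely by a
# similarity score, with no human editing or curation in the loop.
def tokenize(text: str) -> set:
    return set(text.lower().split())

def retrieve(query: str, documents: list, top_k: int = 3) -> list:
    """Rank indexed documents by token overlap with the query (Jaccard similarity)."""
    q = tokenize(query)
    scored = []
    for doc in documents:
        d = tokenize(doc["text"])
        union = q | d
        score = len(q & d) / len(union) if union else 0.0
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # sort on score only
    return [doc for score, doc in scored[:top_k] if score > 0]

index = [
    {"url": "site-a.example/ep1", "text": "watch drama episode one full"},
    {"url": "site-b.example/news", "text": "weather news today"},
]
print(retrieve("watch drama episode one", index))  # only the matching link is returned
```

In a full RAG system the retrieved passages would then be handed to a language model to generate the answer; the point relevant to the ruling is that the ranking shown here is mechanical, so prominence in results does not by itself show that anyone ‘recommended’ a link.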
Ethical Safety Guidelines for AI Applications Open for Public Consultation
On April 18, the National Cybersecurity Standardization Technical Committee organized a public consultation for the technical document titled ‘Ethical Safety Guidelines for AI Applications.’
The guidelines set requirements for application developers, service providers, and users. For example, developers are required to set ethical requirements as default settings, establish risk management and traceability mechanisms, retain technical documentation and audit materials, and implement ‘black box’ mechanisms for incident tracing. Service providers must clearly indicate capability boundaries and risks, ensure that non-explainable AI in key areas is used only for auxiliary decision-making, and provide mechanisms for refusal, intervention, and cessation of use.
The guidelines also mention five key scenarios: life health and personal safety, social governance and public services, information dissemination and communication, academic knowledge production, and economic and financial activities.
The drafting institutions for the ‘Ethical Safety Guidelines for AI Applications’ include Tsinghua University, the China Electronics Standardization Institute, Shanghai Jiao Tong University, Alibaba Group, Huawei Technologies Co., Ltd., and Beijing DeepSeek AI Basic Technology Research Co., Ltd. While the guidelines are not mandatory, they provide standard practice guidance and are currently in the public consultation phase.