Salesforce Artificial Intelligence Acceptable Use Policy

1. Scope
   A. This Artificial Intelligence Acceptable Use Policy (“Policy”) applies to customers’ use of all services offered by Salesforce, Inc. or its affiliates (“Salesforce”), or third party products, applications or functionality that interoperate with services offered by Salesforce, that incorporate artificial intelligence (collectively, “Covered AI Services”). Within the Covered AI Services, those that use Generative AI will be referred to as the “Covered Generative AI Services.” The terms of this Policy are in addition to the Acceptable Use and External Facing Services Policy at https://www.salesforce.com/company/legal/agreements/.

2. Last Updated
   A. August 5, 2025

3. Changes to Policy
   A. Salesforce may change this Policy by posting an updated version of the Policy at http://salesforce.com, and such updates will be effective upon posting.

4. Violations
   A. A customer’s violation of this Policy will be considered a material breach of the Main Services Agreement (“MSA”) and/or other agreement governing the customer’s use of the services.

5. Disallowed Usage
   A. Customers may not use a Covered AI Service, nor allow their users or any third party to use a Covered AI Service, for the following:
      I. Automated Decision-Making Processes with Legal Effects
         a. As part of an automated decision-making process with legal or similarly significant effects, unless:
            i. Customer ensures that the final decision is made by a human being and takes other factors beyond the Services’ recommendation into account; and
            ii. Customer is transparent about the role of the Covered AI Service and the logic involved in the decision-making process, including providing subjects of the decision with the right to receive an explanation of the role of the Covered AI Service in the decision-making and the main reasons for the decision.
         b. As part of an automated decision-making process for payday lending, even when the final decision is made by a human being.
      II. Individualized Advice from Licensed Professionals
         a. Generating individualized advice that in the ordinary course of business would be provided by a licensed professional. This includes, for example, financial and legal advice.
         b. Generating or providing individualized medical advice, treatment, or diagnosis to a consumer or end user.
         c. For clarity, this section does not limit Customer from using Covered AI Services for other purposes, such as customer support in regulated industries, or to assist a licensed professional where Covered AI Services were not leveraged in the generation of individualized advice. When a Customer uses such services to assist in providing individualized advice (e.g., summarization), there must be a qualified person reviewing the output.
      III. Explicitly Predicting or Categorizing Based on Protected Characteristics
         a. Explicitly predicting, or categorizing based on, an individual’s protected characteristic, including, but not limited to, racial or ethnic origin, past, current, or future political opinions, religious or philosophical beliefs, trade union membership, age, gender, sex life, sexual orientation, disability, health status, medical condition, financial status, criminal convictions, or likelihood to engage in criminal acts.
            i. The previous sentence does not limit or prohibit use cases or tools designed specifically to identify security breaches, unauthorized access, fraud, and other security vulnerabilities, or to identify and reduce bias in Salesforce AI Services.
            ii. Additionally, Customer may not submit images, videos, or audio recordings of individuals for the purposes of creating, analyzing, or categorizing based on, biometric identifiers, such as face prints, fingerprints, scans of eyes or hands, voiceprints, or facial geometry.
      IV. Social Scoring and Crime Prediction
         a. Evaluating, classifying, scoring, or rating individuals or groups based on their social behavior or personality characteristics where such scoring leads to:
            i. Detrimental or unfavorable treatment unrelated to the original context of the collected data; or
            ii. Unjustified or disproportionate treatment relative to the assessed behavior.
         b. Assessing or predicting the risk of an individual committing a criminal offense based solely on profiling or assessing their personality traits and characteristics.
      V. Emotion and Facial Recognition
         a. Detecting, inferring, or assessing individuals’ emotions in the workplace or in educational institutions, except for medical or safety reasons.
         b. Creating or expanding facial recognition databases, e.g., through scraping of facial images from the Internet or from CCTV footage.
         c. Using real-time biometric recognition in public spaces for law enforcement purposes, unless an exception is expressly permitted by applicable law (e.g., to prevent a serious threat or find a missing person).
      VI. Deceptive Activity
         a. Engaging in plagiarism or academic dishonesty.
         b. Deploying subliminal, purposefully manipulative, or deceptive techniques that impair an individual’s ability to make an informed decision.
         c. Exploiting vulnerabilities of individuals, e.g., due to their age, disability, or specific social or economic situation.
      VII. Child Exploitation and Abuse
         a. Creating, sending, uploading, displaying, storing, processing, or transmitting material that may be harmful to minors, including, but not limited to, for any purposes related to child exploitation or abuse, such as real or artificial Child Sexual Abuse Material (CSAM).
   B. Customers may not use a Covered Generative AI Service, nor allow their users or any third party to use any Covered Generative AI Service, for the following:
      I. Weapons Development
         a. Developing, advertising, marketing, distributing, or selling weapons, weapon accessories, or explosives, as enumerated by the United States Munitions List.
      II. Political Campaigns
         a. Targeting, creating, or distributing political campaign materials for external public or semi-public audiences. Political campaign material refers to material:
            i. That may influence a political process, such as an election, passage of legislation, regulation or ballot measure, judicial ruling, or content for campaigning purposes; or
            ii. Soliciting financial support for (i).
      III. Adult Content
         a. Creating, sending, uploading, displaying, storing, processing, or transmitting sexually explicit material; or
         b. Creating, sending, uploading, displaying, storing, processing, or transmitting sexual chatbots or engaging in erotic chat.

6. Use Notice and Disclosures
   A. USE NOTICE: AI technology, including Generative AI, will continue to be used in new and innovative ways. Customer is responsible for determining whether its use of these technologies is safe.
   B. Customers must disclose to end users when they are interacting directly with automated systems, such as Einstein bots, Agentforce Agents, or similar features, including voice- or call-based bots, unless there is a human in the loop, and, when required by law, provide a means for end users to interact with a human instead of an automated system.
   C. Customers must disclose to individuals when exposing them, for the limited purpose permitted in this Policy, to an emotion recognition system or a biometric categorization system.
   D. Customers may not deceive end users or consumers by misrepresenting content generated through automated means as human-generated or original content.