How to Pass Salesforce Certified AI Associate Certification Exam


Last Updated on September 7, 2023 by Rakesh Gupta

As a newly minted Salesforce Certified AI Associate, I am sharing my study experiences with you and want you to be the next one to ace it! So, get ready and dive in!

👉 As you are here, you may want to check out the How to Pass Salesforce Certified Associate Certification Exam article.

A New Credential for AI Trailblazers

The Salesforce AI Associate credential is designed for individuals with any level of AI knowledge, from beginners to more experienced practitioners. The Salesforce Certified AI Associate should be able to provide informed strategies and guide stakeholder decisions based on Salesforce’s Trusted AI Principles. Candidates should be familiar with data management, security considerations, common business and productivity tools, and Salesforce Customer 360.

While preparing for the certification exam, trailblazers will work through the key topics below, which will help them reach their goals even faster:

  1. Explain the basic principles and applications of AI within Salesforce.
  2. Differentiate between the types of AI and their capabilities.
  3. Identify CRM AI capabilities.
  4. Describe the benefits of AI as they apply to CRM.
  5. Describe the ethical challenges of AI.
  6. Apply Salesforce’s Trusted AI Principles to given scenarios.
  7. Describe the importance of data quality.
  8. Describe the elements/components of data quality.
  9. And much more

So, Who is an Ideal Candidate for the Exam?

Salesforce Certified AI Associate candidates should have a foundational knowledge of Salesforce’s core capabilities and should be able to navigate Salesforce. The Salesforce Certified AI Associate exam is for individuals who want to demonstrate knowledge, skills, or experience in the following areas:

  • The basics of AI and its different types, such as predictive analytics, machine learning, NLP, and computer vision.
  • Salesforce’s trusted AI principles, particularly in the context of CRM systems like Salesforce and its suite of products.
  • The role of data quality, data preparation/cleansing, and data governance in training and fine-tuning AI models.
  • Ethical and responsible handling of data, including privacy, bias, security, and compliance considerations.
  • Ability to engage in meaningful discussions with stakeholders about how AI can improve their business across differing scenarios, including identifying opportunities for AI-driven improvements and potential challenges.

How to prepare for the exam?

Learning styles differ widely – so there is no magic formula that one can follow to clear an exam. The best practice is to study for a few hours daily – rain or shine! Below are some details about the exam and study materials:

  • 40 multiple-choice/multiple-select questions – 70 mins
  • 65% is the passing score
  • Exam Sections and Weighting
    • AI Fundamentals: 17%
    • AI Capabilities in CRM: 8%
    • Ethical Considerations of AI: 39%
    • Data for AI: 36%
  • The exam fee is $75 plus applicable taxes
  • Retake fee: Free
  • Schedule your certification exam here

The following list is not exhaustive; so check it out and use it as a starting point:

  1. Salesforce Certified AI Associate FAQ
  2. Salesforce Certified AI Associate Exam Guide
  3. Trailmix: Prepare for Your Salesforce AI Associate Credential
  4. Module: Salesforce AI Associate Certification Prep

What You Need to Know to Smooth Your Journey

On a very high level, you have to understand the following topics to clear the exam. All credit goes to the Salesforce Trailhead team and their respective owners.

  1. AI Fundamentals: 17%
    1. Artificial Intelligence Crash Course YouTube Video
      1. In 20 episodes, the author teaches you about Artificial Intelligence and Machine Learning! This course is based on a university-level curriculum. By the end of the course, you will be able to:
        1. Define, differentiate, and provide examples of Artificial Intelligence and three types of Machine Learning: supervised, unsupervised, and reinforcement
        2. Understand how different AI and ML approaches can be combined to create compelling applications such as natural language processing, robotics, recommender systems, and web search
        3. Implement several types of AI to classify images, generate text from examples, play video games, and recommend content based on past preferences
        4. Understand the causes of algorithmic bias and audit datasets for several of these causes
        5. Reason about how specific advances in AI may impact our world and your life, for better or for worse
    2. What Is Artificial Intelligence? YouTube Video
    3. Introduction to Generative AI YouTube Video
    4. Introduction to large language models YouTube Video
    5. Introduction to Responsible AI YouTube Video
    6. AI vs Machine Learning YouTube Video
    7. Neural Networks and Deep Learning YouTube Video
    8. Types of AI Capabilities
      1. Numeric Predictions – Often, AI predictions take the form of a value between 0 (not going to happen) and 1 (totally going to happen). Numeric predictions include more than just percent values; they can be any numeric value, such as dollars.
      2. Classifications – Often, AI classifiers can do the job just as well as, or better than, humans. That said, each classifier is only good at one, narrow task. So the AI that’s great at detecting phishing emails would be lousy at identifying pictures of actual fish.
      3. Robotic Navigation – Some AIs excel at navigating a changing environment, and that might mean actual navigation in the case of autonomous (hands-free) driving. AI-powered cars are already quite capable of keeping centered in a lane and following at a safe distance on the highway. They adapt to curves in the road, gusts of wind from semi trucks, and sudden stops due to traffic.
      4. Language Processing – Natural language processing relies on an understanding of how words are used together, and that lets AI extract the intention behind the words. For example, you might want to translate a document from English to German. Or maybe you want a short summary of a long, scientific paper. AI can do that too.
    9. What AI can do may seem like magic. And like magic, it’s natural to want a peek behind the curtain to see how it’s all done. What you’ll find is that computer scientists and researchers are using lots of data, math, and processing power in place of mirrors and misdirection. Learning how AI actually works will help you use it to its fullest potential, while avoiding pitfalls due to its limitations.
    10. This simple set of rules for turning an input into an output is an example of an algorithm. Algorithms have been written to perform some pretty sophisticated tasks. But some tasks have so many rules (and exceptions) that it’s impossible to capture them all in a hand-crafted algorithm. Swimming is a good example of a task that is hard to encapsulate as a set of rules. You might get some advice before jumping in the pool, but you only really figure out what works once you’re trying to keep your head above water. Some things are learned best by experience.
    11. Machine learning (ML) is the process of using large amounts of data to train a model to make predictions, instead of handcrafting an algorithm.
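To make the contrast concrete, here is a minimal sketch (my own illustration, not exam material) of the same yes/no decision written first as a hand-crafted rule and then learned from labeled historical data with scikit-learn:

```python
# Hand-crafted algorithm vs. machine learning: a minimal sketch.
from sklearn.linear_model import LogisticRegression

# Hand-crafted algorithm: a person writes the rules explicitly.
def is_hot_lead(revenue_millions: float, past_purchases: int) -> bool:
    return revenue_millions > 1.0 and past_purchases >= 3

# Machine learning: the "rules" are learned from labeled historical data.
X = [[2.0, 5], [0.05, 0], [1.5, 4], [0.03, 1]]  # [revenue in $M, purchases]
y = [1, 0, 1, 0]                                # past outcomes: 1 = converted
model = LogisticRegression().fit(X, y)

print(is_hot_lead(1.2, 4))        # rule-based answer
print(model.predict([[1.2, 4]]))  # learned answer
```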
    12. Structured vs Unstructured Data
      1. The spreadsheet is what we would call structured data. It is well organized, with labels on every column so you know the significance of every cell.
      2. Unstructured data would be something like a news article, or an unlabeled image file. The kind of data that you have available will affect what kind of training you can do.
        1. Unstructured data is used for unsupervised learning, which is when AI tries to find connections in the data without really knowing what it’s looking for.
    13. A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain.
    14. Machine Learning bias, also known as algorithm bias or Artificial Intelligence bias, refers to the tendency of algorithms to reflect human biases. It is a phenomenon that arises when an algorithm delivers systematically biased results as a consequence of erroneous assumptions of the Machine Learning process.
    15. Training AI by adding extra layers to find hidden meaning in data is what’s called deep learning.
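As a rough illustration (my own, not from the exam guide), here is what a tiny neural network’s forward pass looks like in NumPy; deep learning is essentially stacking many more of these hidden layers:

```python
# A tiny neural network forward pass. Real networks learn W1, b1, W2, b2
# from data during training; random values stand in for them here.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a common nonlinear activation

x = rng.random(4)                            # one input with 4 features
W1, b1 = rng.random((8, 4)), rng.random(8)   # hidden layer
W2, b2 = rng.random((1, 8)), rng.random(1)   # output layer

hidden = relu(W1 @ x + b1)  # each layer transforms the previous layer's output
output = W2 @ hidden + b2   # "deep" learning = many stacked hidden layers
print(output)
```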
    16. Four main ingredients that are part of any good AI platform: yes-and-no predictions, numeric predictions, classifications, and recommendations. 
      1. Yes-and-No Predictions – The first ingredient is yes-and-no predictions. Yes-and-no predictions allow you to answer questions like, “Is this a good lead for my business?” or “Will this prospect open my email?” AI helps you answer these questions by scanning historical data you’ve stored in your system.
      2. Numeric Predictions – Numeric predictions often power predictive forecasting solutions (for example, “How much revenue will this new customer bring in?”), but they are also used in other contexts like customer service (for example, “How many days will it take us to resolve this customer’s issue?”). Numeric predictions also use your historical data to arrive at these numbers.
      3. Classifications – Classifications frequently use “deep learning” capabilities to operate on unstructured data like free text or images. The idea behind classification is to extract useful information from unstructured data and answer questions like, “How many soda cans are in this picture?” It can even take a statement like, “I’d like to buy another pair of the same shoes I bought last time,” and use that to kick off a workflow that can look up the last shoe order and place the same pair of shoes in their online shopping cart.
      4. Recommendations – Recommendations are key when you have a large set of items that you’d like to recommend to users. Many ecommerce websites apply recommendation strategies to products; they can detect that people who bought a specific pair of shoes also often order a certain pair of socks. When a user puts those shoes in their cart, AI automatically recommends the same socks.
    17. Getting started with AI seems hard, but breaking it into three steps makes it much easier.
      1. Decide what to predict.
      2. Get historical data in order.
      3. Turn predictions to action.
    18. Different parts of a business can use AI to improve their business outcomes.
      1. Marketing – Do you have lots of potential customers, and need help getting through to them? Marketing is a great place for AI because companies usually have lots of data that can be used to target communications and send relevant messages.
      2. Sales Productivity – AI can elevate your sales game by using historical sales data to predict the best possible sales opportunities. Imagine an inside sales rep who has a list of leads, organized by how likely they’ll convert. That rep is going to spend his time connecting with customers at the top of the list and avoiding cold leads.
      3. Customer Service – Customer service is another area where AI can help your company. Every day your company receives emails from customers looking for support. In many companies, someone has to read these emails and route them to the right people. They’re using time classifying emails that could be spent actually providing support. AI could help by reading through emails, doing the case classification based on past inquiries, and then automatically routing the emails to the right person. Cases will get into the hands of the right agent faster.
      4. Retail and Commerce – When shoppers browse online stores, they want an experience that caters directly to them. AI can meet this expectation by producing personalized recommendations for your customers. Historic data tells AI which products are frequently bought together. So if a customer chooses a product, your site can automatically show an offer for a discounted bundle, right on the product page.
    19. Brief overview of a few of the most important components of AI.
      1. Natural language understanding (NLU) refers to systems that handle communication between people and machines.
      2. Natural language processing (NLP) is distinct from NLU and describes a machine’s ability to understand what humans mean when they speak as they naturally would to another human.
      3. Named entity recognition (NER) labels sequences of words and picks out the important things like names, dates, and times. NER involves breaking apart a sentence into segments that a computer can understand and respond to quickly.
      4. Deep learning refers to artificial neural networks being developed between data points in large databases. Just like our human mind connects the dots to give us insights, deep learning uses algorithms to sift through data, draw conclusions, and enhance performance
    20. Salesforce added Einstein to its products to make it easier for any customer of any size, across any industry, to deploy AI and use it in their contact center, empowering you and your agents with the predictive intelligence you need to drive increased customer satisfaction.
      1. Increase deflection and reduce handle time. Einstein Bots can resolve routine customer requests and seamlessly hand off the customer to an agent if an issue requires a human touch.
      2. Turbocharge agent productivity. Einstein Agent gives your agents intelligent, in-context suggestions, helping them do what they do best—help your customers.
      3. Rapid deployment and time-to-value. Service Cloud Einstein is preintegrated with Salesforce and your existing service channels, and comes with an out-of-the-box, intuitive user interface.
    21. Einstein helps you deliver a transformational customer service experience, and it’s built into your existing Service Cloud deployment. By using AI and machine learning—in real time—the following features make everyone in the contact center smarter and more effective.
      1. Einstein Bots automatically resolve top customer issues, collect qualified customer information, and seamlessly hand off the customers to agents, meaning increased case deflection in the contact center and reduced handle times for agents.
      2. Einstein Agent drives agent productivity across the contact center. Through intelligent case routing, automatic triaging, and case field prediction, Einstein Agent significantly accelerates issue resolution and enhances efficiency.
      3. Einstein Discovery helps managers take action with predictive service KPIs. By serving up real-time analysis of drivers that impact KPIs, like churn or CSAT and suggested recommendations and explanations, managers are empowered to make more strategic decisions for their business.
      4. Einstein Vision for Field Service automates image classification to resolve issues faster on-site. Just by taking a picture of the object, Einstein Vision can instantly identify the part, ensuring accuracy for the technician and boosting first-time fix rates.
      5. Einstein Language brings the power of deep learning to developers. They can use pretrained models to classify text by the sentiment as either positive, neutral, or negative, and then be able to classify the underlying intent in a body of text. Put it all together, and you have the ability to process language across unstructured data in any app.
    22. A bot is simply a computer program that can carry on a conversation when a user speaks or texts with it. But chatbots are so much more than that.
      1. Chatbots are your allies in the race to resolve support issues fast. They can resolve low-touch customer requests and seamlessly hand off complex inquiries.
      2. Chatbots deflect common customer issues. They help customers self-direct immediately, and resolve common issues without waiting to “get in the queue.”
      3. Chatbots reduce chat duration (and save money). For more complex issues, CRM-connected chatbots can collect and qualify customer information and seamlessly hand it off to an agent, reducing handle time and increasing customer satisfaction.
      4. Most importantly, chatbots can be trained to understand human language—and respond intelligently—through natural language understanding (NLU).
    23. Customer service chatbots can be programmed in all sorts of ways—even their tone. Specifically, bots can be programmed to come across as natural and meet basic user expectations about how conversation works. To do that, chatbots must reflect human behavior and preferences in conversation. Well-designed chatbots express the following qualities.
      1. Transparent. The chatbot should identify itself as a chatbot right up front. It should state what it can do and provide guidance via a pop-up menu of top customer requests.
      2. Personable. The chatbot should have a voice and tone that expresses the brand. This can be in the type of language or which emojis (if any) are used.
      3. Thorough. The chatbot should give the user complete information—and time to read it. Chatbots can also provide images to enhance the clarity of the information provided.
      4. Iterative. To address any issues that arise, chatbots should be continuously modified. Chatbots should improve their performance over time and not be thought of as a one-and-done kind of thing.
    24. Generative artificial intelligence (AI) is artificial intelligence capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.
    25. Predictive AI is artificial intelligence that collects and analyzes data to predict future occurrences. Predictive AI aims to understand patterns in data and make informed predictions. It’s used in various industries: in finance to make informed decisions about expected profit and loss based on records, in healthcare to determine if a person’s health status is tilting toward an illness, and in fraud detection.
    26. Some AIs that perform NLP are trained on huge amounts of data, which in this case means samples of text written by real people. The internet, with its billions of web pages, is a great source of sample data. Because these AI models are trained on such massive amounts of data, they’re known as large language models (LLMs).
      1. These large language models make it possible to do some incredibly advanced language-related tasks (a brief sketch follows this list):
        1. Summarization
        2. Translation
        3. Error correction
        4. Question answering
        5. Guided image generation
        6. Text-to-speech 
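As a quick illustration (my own sketch using the open-source Hugging Face transformers library, which is not exam-specific), two of these tasks look like this:

```python
# A brief sketch using the Hugging Face transformers library (one popular,
# open-source way to run pretrained models; not required for the exam).
from transformers import pipeline

# Summarization: condense a long passage into a short one.
summarizer = pipeline("summarization")  # downloads a default pretrained model
text = ("Large language models are trained on massive amounts of text from "
        "the internet, which lets them summarize, translate, and answer "
        "questions about language they have never seen before.")
print(summarizer(text, max_length=25, min_length=5)[0]["summary_text"])

# Translation: English to German.
translator = pipeline("translation_en_to_de")
print(translator("How are you today?")[0]["translation_text"])
```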
    27. Common Concerns About Generative AI
      1. Hallucinations: Remember that generative AI is really another form of prediction, and sometimes predictions are wrong. Predictions from generative AI that diverge from an expected response, grounded in facts, are known as hallucinations. They happen for a few reasons, like if the training data was incomplete or biased, or if the model was not designed well. So with any AI generated text, take the time to verify the content is factually correct.
      2. Data security: Businesses can share proprietary data at two points in the generative AI lifecycle. First, when fine-tuning a foundational model. Second, when actually using the model to process a request with sensitive data. Companies that offer AI services must demonstrate that trust is paramount and that data will always be protected.
      3. Plagiarism: LLMs and AI models for image generation are typically trained on publicly available data. There’s the possibility that the model will learn a style and replicate that style. Businesses developing foundational models must take steps to add variation into the generated content. Also, they may need to curate the training data to remove samples at the request of content creators.
      4. User spoofing: It’s easier than ever to create a believable online profile, complete with an AI generated picture. Fake users like this can interact with real users (and other fake users), in a very realistic way. That makes it hard for businesses to identify bot networks that promote their own bot content.
      5. Sustainability: The computing power required to train AI models is immense, and the processors doing the math require a lot of actual power to run. As models get bigger, so do their carbon footprints. Fortunately, once a model is trained it takes relatively little power to process requests. And, renewable energy is expanding almost as fast as AI adoption!
    28. Natural language processing (NLP), is a field of artificial intelligence (AI) that combines computer science and linguistics to give computers the ability to understand, interpret, and generate human language in a way that’s meaningful and useful to humans.
    29. Data processed from unstructured to structured is called natural language understanding (NLU).
      1. Elements of natural language in English include:
        1. Vocabulary: The words we use
        2. Grammar: The rules governing sentence structure
        3. Syntax: How words are combined to form sentences according to grammar
        4. Semantics: The meaning of words, phrases, and sentences
        5. Pragmatics: The context and intent behind cultural or geographic language use
        6. Discourse and dialogue: Units larger than a single phrase or sentence, including documents and conversations
        7. Phonetics and phonology: The sounds we make when we communicate
        8. Morphology: How parts of words can be combined or uncombined to make new words
    30. Syntactic parsing involves the analysis of words in the sentence for grammar and their arrangement in a manner that shows the relationships among the words. Syntactic parsing may include:
      1. Segmentation – Larger texts are divided into smaller, meaningful chunks. Segmentation usually occurs at the end of sentences at punctuation marks to help organize text for further analysis.
      2. Tokenization – Sentences are split into individual words, called tokens. In the English language, tokenization is a fairly straightforward task because words are usually broken up by spaces. In languages like Thai or Chinese, tokenization is much more complicated and relies heavily on an understanding of vocabulary and morphology to accurately tokenize language.
      3. Stemming – Words are reduced to their root form, or stem. For example, breaking, breaks, or unbreakable are all reduced to break. Stemming helps to reduce the variations of word forms but, depending on context, it may not lead to the most accurate stem.
      4. Lemmatization – Similar to stemming, lemmatization reduces words to their root, but takes the part of speech into account to arrive at a much more valid root word, or lemma.
      5. Part of speech tagging – Assigns grammatical labels or tags to each word based on its part of speech, such as a noun, adjective, verb, and so on. Part of speech tagging is an important function in NLP because it helps computers understand the syntax of a sentence.
      6. Named entity recognition (NER) – Uses algorithms to identify and classify named entities – like people, dates, places, organizations, and so on – in text to help with tasks like answering questions and information extraction.
    31. Sentiment analysis is the process of analyzing digital text to determine if the emotional tone of the message is positive, negative, or neutral.
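Here is a minimal sketch of the parsing steps and sentiment analysis above using NLTK (one common open-source NLP toolkit; the exam itself is tool-agnostic):

```python
# Syntactic parsing steps and sentiment analysis with NLTK.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.sentiment import SentimentIntensityAnalyzer

for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet", "vader_lexicon"):
    nltk.download(pkg, quiet=True)  # one-time model/lexicon downloads

text = "The shipping was slow, but the shoes are fantastic!"

sentences = nltk.sent_tokenize(text)               # segmentation
tokens = nltk.word_tokenize(sentences[0])          # tokenization
stems = [PorterStemmer().stem(t) for t in tokens]            # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatization
tags = nltk.pos_tag(tokens)                        # part of speech tagging

print(tags)
print(SentimentIntensityAnalyzer().polarity_scores(text))  # sentiment scores
```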
    32. Salesforce Execs Weigh In: What Is Generative AI?
    33. Generative AI vs. Predictive AI
    34. AI From A to Z: The Generative AI Glossary for Business Leaders
  2. AI Capabilities in CRM: 8%
    1. AI Strategy 101: Everything You Need to Know About AI + Data + CRM YouTube Video
    2. Salesforce Einstein Discovery augments your business intelligence with statistical modeling and supervised machine learning in a no-code-required, rapid-iteration environment. Einstein Discovery enables you to: 
      1. Identify, surface, and visualize insights into your business data.
      2. Predict future outcomes and suggest ways to improve predicted outcomes in your workflows.
    3. Einstein Discovery-powered solutions address these use cases (a rough scikit-learn analogy follows this list):
      1. Regressions for numeric outcomes represented as quantitative data (measures), such as currency, counts, or any other quantity.
      2. Binary classification for text outcomes with only two possible results. These are typically yes or no questions that are expressed in business terms, such as churned or not churned, opportunity won or lost, employee retained or not retained, and so on.
      3. Multiclass classification for text outcomes with 3 to 10 possible results. For example, a manufacturer can predict, based on customer attributes, which of five service contracts a customer is most likely to choose.
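As flagged above, here is a rough scikit-learn analogy for the three use-case shapes. This is my own sketch for intuition only; Einstein Discovery itself is a no-code product:

```python
# The three Einstein Discovery use-case shapes, sketched with scikit-learn.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: a numeric outcome, such as deal amount.
reg = LinearRegression().fit([[1], [2], [3]], [100.0, 210.0, 290.0])

# Binary classification: two outcomes, such as churned / not churned.
binary = LogisticRegression().fit([[1], [5], [2], [6]], [0, 1, 0, 1])

# Multiclass classification: 3-10 outcomes, such as which of 3 contracts.
multi = LogisticRegression().fit(
    [[1], [4], [8], [2], [5], [9]], [0, 1, 2, 0, 1, 2]
)

print(reg.predict([[4]]), binary.predict([[4]]), multi.predict([[6]]))
```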
    4. Steps to implement an Einstein Discovery solution:
      1. Target Outcome
      2. Prepare Data
      3. Create Model
      4. Evaluate Model
      5. Explore Insights
      6. Deploy Model
      7. Predict & Improve
    5. Ideally, your dataset:
      1. Includes all the relevant factors associated with the business outcome you want to investigate and improve
      2. Omits extraneous columns that add complexity but no analytical value
      3. Contains high-quality data that is representative of the operational reality of the outcome you focus on
    6. A model is a sophisticated custom equation based on a comprehensive statistical understanding of past outcomes that’s used to predict future outcomes. An Einstein Discovery model is a collection of performance metrics, settings, predictions, and data insights. Einstein Discovery walks you through the steps to create a model based on the outcome you want to improve (your model’s goal), the data you’ve assembled for that purpose (in the CRM Analytics dataset), and other settings that tell Einstein Discovery how to conduct the analysis and communicate its results.
    7. Einstein Discovery generates these kinds of insights:
      1. Descriptive – Derived from historical data using descriptive analytics involving statistical analysis. Descriptive insights show what happened in your data.
      2. Diagnostic – Derived from the model. Diagnostic insights show why it happened. They drill deeper and help you understand which variables most significantly drive the business outcome you’re analyzing.
      3. Comparative – Derived from the model. Comparative insights explain the difference in the outcome variable by comparing two specific subgroups. With comparative insights, you isolate factors (categories or buckets) and compare their impact on the outcome with other factors or with global averages. Einstein Discovery shows waterfall charts to help you visualize these comparisons.
    8. Einstein Discovery enables businesses to explore patterns, relationships, and correlations in historical data. Through the power of machine learning and artificial intelligence, Einstein can also predict future outcomes, which allow business users to prioritize their workloads and make data-driven decisions. Along with the benefits of this predictive power comes the responsibility of producing models that are ethical and accountable. Models that are built on biased historical data can lead to skewed predictions. Fortunately, Einstein Discovery helps you detect bias in your data so that you can remove its influence from your models.
    9. When working with sensitive variables, you can mark a variable in your model for bias analysis. For example, in the United States and Canada, variables related to legally protected classes, such as age, race, and gender, face restrictions on their use. In regulated industries like employment and hiring, lending, and healthcare, discrimination against these classes is illegal.
    10. Proxy values are other attributes in your dataset that are correlated with sensitive variables. In one example, Account Name is a 90% proxy for postal code. From such a strong correlation, we can infer that many of the postal codes Einstein Discovery identified as most likely to pay late were flagged because the postal code was associated with one Account Name.
    11. Einstein Discovery for Reports produces lightning-fast insights that are impartial, objective, and statistically meaningful. It uses colorful charts and fact-filled explanations to make it easy for you to digest and interpret the insights. Your job is simply to flip through the insights to find the ones most relevant to your business goals.
    12. Einstein Prediction Service is a public REST API service that lets you programmatically interact with Einstein Discovery–powered models and predictions (a hypothetical request sketch follows this list). You use Einstein Prediction Service to:
      1. Get predictions on your data.
      2. Get suggested actions to take to improve predicted outcomes.
      3. Manage prediction definitions and models that are deployed in Salesforce.
      4. Manage bulk scoring jobs.
      5. Manage model refresh jobs.
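As noted above, here is a hypothetical Python sketch of a prediction request. The endpoint path, API version, and payload field names are my assumptions from memory of the public docs, so verify them against the official Einstein Prediction Service reference before using:

```python
# Hypothetical Einstein Prediction Service call via the REST API.
# The URL path, version, and payload fields below are assumptions --
# confirm them against the official API documentation.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder org URL
TOKEN = "<your-oauth-access-token>"                  # placeholder token

payload = {
    "predictionDefinition": "0ORxx0000000001",  # placeholder prediction ID
    "type": "RawData",
    "columnNames": ["Amount", "Region"],
    "rows": [["50000", "West"]],
}

resp = requests.post(
    f"{INSTANCE}/services/data/v58.0/smartdatadiscovery/predict",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # predictions, top predictors, suggested improvements
```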
    13. The predictions panel shows you the key elements returned in a prediction request:
      1. Prediction – Predicted outcome and a descriptive label. In this example, the opportunity is predicted to close in 29.5 days.
      2. Top Predictors – Conditions that contributed most strongly to the predicted outcome, including favorable and unfavorable contributions. In this example, the condition Competitor Type is Known and Route to Market is Reseller increases the predicted time to close by 2.02 days. The arrow to the left points up to indicate that this predictor increases the predicted outcome. The arrow is red (instead of green) to indicate that the effect of this predictor is unfavorable, because our goal is to minimize the time to close.
      3. How to Improve This – Suggested actions the user can take to improve the predicted outcome. In this example, the action of changing Supplies Group to Car Accessories reduces the time to close by 3.48 days, as indicated by the green arrow pointing down.
    14. A prediction is a derived value, produced by a model, that represents a possible future outcome based on a statistical understanding of past outcomes plus provided input values (predictors).
    15. When working with Einstein Prediction Service, it’s helpful to think of two main activities:
      1. Producing a model involves using CRM Analytics Studio to build and deploy the model in Salesforce. In order to predict customer churn, for example, someone needed to provide the model that predicts whether a customer is likely to go or stay. The next unit walks you through the steps of creating and deploying a model.
      2. Consuming a model involves using the deployed model to generate predictions and improvements for your data. Our customer churn example used a Lightning page to display prediction, top predictors, and improvements. In the last unit, you learn to get the same information using your favorite REST client and Einstein Prediction Service.
    16. You can get predictions from Einstein Prediction Service in two key ways:
      1. Declaratively in automatic prediction fields, the PREDICT function in process automation formulas, the Discovery Predict transformation in Data Prep Recipes, the Einstein Discovery action in Salesforce flows, and in Einstein Discovery in Tableau.
      2. Programmatically using Apex and REST APIs.
    17.  Learn how Einstein is embedded in clouds today
      1. Sales Cloud Einstein – In Sales, the main goal is to sell, sell, sell. We know how important it is for sales reps to prioritize their day so that they can convert the most leads and focus on the right opportunities. They also have to keep in touch with their prospects and identify the best time to follow up. Productivity is their most important asset. Reps can be more productive if they know when to interact with customers with just the right offer. Here are a few things Sales Cloud Einstein can do for your sales rep.
        1. Boost win rates by prioritizing leads and opportunities most likely to convert.
        2. Discover pipeline trends and take action by analyzing sales cycles with prepackaged best practices.
        3. Maximize time spent selling by automating data capture.
        4. Generate relevant outreach automatically with CRM data.
      2. Service Cloud Einstein – The cornerstone of good customer service is making sure every customer has a stellar experience from beginning to end. In fact, customer service can be more important to the consumer than the quality or price of a product. Here are a few things Service Cloud Einstein can do for your service agents.
        1. Accelerate case resolution by automatically predicting and populating fields on incoming cases to save time and reduce repetitive tasks.
        2. Increase call deflection by resolving routine customer requests on real-time digital channels like web and mobile chat or mobile messaging.
        3. Reduce handle time by collecting and qualifying customer info for seamless agent handoff.
        4. Solve issues faster by giving your agents intelligent, in-context conversation suggestions and knowledge recommendations.
        5. Create tailored service replies, knowledge articles, and work summaries automatically with CRM data.
      3. Marketing Cloud Einstein – The goal of marketers is to understand their customers better so they can deliver the most effective, personalized campaigns. But every customer is unique, which means marketers need to know which channels customers spend the most time in, how to deliver the right content to them, and when to engage with them. Analyzing past customer behavior helps marketers predict future behavior, anticipate customer needs, and guide experiences across every touch point. Marketing Cloud Einstein can help you accomplish this.
        1. Know your audience more deeply by uncovering consumer insights and making predictions.
        2. Engage more effectively by suggesting when and on which channels to reach out to customers.
        3. Create personalized messages and content based on consumer preferences and intent.
        4. Be more productive by streamlining marketing operations.
        5. Generate subject lines and web campaigns automatically with CRM data.
      4. Commerce Cloud Einstein – You’ve probably noticed that your customers are interacting with your brand on multiple channels. Whether they’re buying online or complaining on chat, your brand needs to provide a highly personalized customer experience no matter where or how they shop. Here are a few things Commerce Cloud Einstein can do for your retailers and customers.
        1. Increase revenue by showing shoppers the best products for them, and eliminate the time-consuming activity of manually merchandising each individual page.
        2. Create highly visual dashboards to get a snapshot of your customer’s buying patterns and use these dashboards to power up your merchandising.
        3. Personalize the explicit search (search via the search box), implicit search (browsing in the storefront catalog), and category pages for every shopper, saving your customers time and bringing your business more revenue.
        4. Generate smart product descriptions automatically to increase conversions.
    18. Einstein Bots allow you to build a smart assistant into your customers’ favorite channels like chat, messaging, or voice. Einstein Bots use Natural Language Processing (NLP) to provide instant help for customers by answering common questions or gathering the right information to hand off the conversation seamlessly to the right agent for more complex questions or cases.
    19. Einstein Prediction Builder is a simple point-and-click wizard that allows you to make custom predictions on your non-encrypted Salesforce data, fast. You can create predictions for any part of your business—across sales, service, marketing, commerce, IT, finance, and even HR—with clicks, not code.
    20. Einstein Next Best Action (NBA) allows you to use rules-based and predictive models to provide anyone in your business with intelligent, contextual recommendations and offers. Actions are delivered at the moment of maximum impact—surfacing insights directly within Salesforce.
    21. Like Einstein Prediction Builder, Einstein Discovery also predicts outcomes without requiring your own data scientist.
    22. Einstein GPT allows businesses to generate personalized and relevant content by grounding large language models (LLMs) in their CRM data safely and securely. 
    23. Responsible Creation of Artificial Intelligence
    24. Einstein Bots Basics
    25. Einstein Next Best Action
    26. Sales Cloud Einstein
  3. Ethical Considerations of AI: 39%
    1. The Biggest Ethical Challenges For Artificial intelligence YouTube Video
    2. What is AI Ethics? YouTube Video
    3. Algorithmic Bias and Fairness: Crash Course AI YouTube Video
    4. Trusted AI for Enterprise YouTube Video
    5. Trusted AI Principles
      1. Responsible – We strive to safeguard human rights, to protect the data we are trusted with, observe scientific standards and enforce policies against abuse. We expect our customers to use our AI responsibly, and in compliance with their agreements with us, including our Acceptable Use Policy.
      2. Accountable – We believe in holding ourselves accountable to our customers, partners, and society. We will seek independent feedback for continuous improvement of our practice and policies and work to mitigate harm to customers and consumers.
      3. Transparent – We strive to ensure our customers understand the “why” behind each AI-driven recommendation and prediction so they can make informed decisions, identify unintended outcomes and mitigate harm.
      4. Empowering – We believe AI is best utilized when paired with human ability, augmenting people, and enabling them to make better decisions. We aspire to create technology that empowers everyone to be more productive and drive greater impact within their organizations.
      5. Inclusive – AI should improve the human condition and represent the values of all those impacted, not just the creators. We will advance diversity, promote equality, and foster equity through AI.
    6. From Principles to Practice – It’s not enough to have a set of principles. Ethics is a team sport; to be meaningful, everyone in the company must understand their responsibilities for living these principles. Below are examples of how we have translated these principles into practice.
    7. Generative AI: 5 Guidelines for Responsible Development
    8. AI Ethics Maturity Model
    9. Recommendations for Ethical Behavioral Marketing – Consumers value personalization that addresses their needs, has a clear benefit, and demonstrates genuine care. Here are a few recommendations for deploying a personalization solution that constrains the negatives while driving shared positive outcomes.
      1. Collect and Respect Preferences – Honor customer preferences and use only the data they’ve consented to sharing. Be explicit with consumers about the impact–the benefits and consequences–of their consent or lack of consent. Provide clear controls to opt-in or out.
      2. Audience Targeting – Messaging should be targeted based on consumer-expressed interests, not demographics. View consumers as they truly are: multi-dimensional individuals with many varied idiosyncratic affinities. Highly individualized messaging is more effective than targeting based only on demographic data. It’s essential to reduce biases that can distort your messaging with associations that simply don’t hold up or were never accurate in the first place.
      3. Frequency Capping – Overwhelming a customer with too many communications can subvert your brand. Frequent but unwanted messages can annoy your customers and drive them to tune you out. So, how frequently should you send messages? Once a day? Ten times a month? Is there a magic number? It’s a delicate balance to reach. Of course you want to get your customers to engage. But overexposing them to your message can have the opposite effect.
    10. Get to Know Relationship Design
    11. Learn Privacy and Data Protection Law
    12. Five guidelines Salesforce is using to guide the development of trusted generative AI, here at Salesforce and beyond.
      1. Accuracy: We need to deliver verifiable results that balance accuracy, precision, and recall in the models by enabling customers to train models on their own data. We should communicate when there is uncertainty about the veracity of the AI’s response and enable users to validate these responses. This can be done by citing sources, explainability of why the AI gave the responses it did (e.g., chain-of-thought prompts), highlighting areas to double-check (e.g., statistics, recommendations, dates), and creating guardrails that prevent some tasks from being fully automated (e.g., launch code into a production environment without a human review).
      2. Safety: As with all of our AI models, we should make every effort to mitigate bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, and red teaming. We must also protect the privacy of any personally identifying information (PII) present in the data used for training and create guardrails to prevent additional harm (e.g., force publishing code to a sandbox rather than automatically pushing to production).
      3. Honesty: When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (e.g., open-source, user-provided). We must also be transparent that an AI has created content when it is autonomously delivered (e.g., chatbot response to a consumer, use of watermarks).
      4. Empowerment: There are some cases where it is best to fully automate processes but there are other cases where AI should play a supporting role to the human — or where human judgment is required. We need to identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all (e.g., generate ALT text to accompany images).
      5. Sustainability: As we strive to create more accurate models, we should develop right-sized models where possible to reduce our carbon footprint. When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, more sparsely trained models.
  4. Data for AI: 36%
    1. The role of data quality in Artificial Intelligence? YouTube Video
    2. There are four main types of data analytics:
      1. Descriptive – Descriptive analysis is mainly used to tell you what happened. It lets you use data collected by a system to identify what went wrong, what could be improved, or which metric is not reporting as it should. This type of analysis is widely used to summarize large datasets in order to describe outcomes to stakeholders.
      2. Diagnostic – Diagnostic analysis goes beyond simply informing; it investigates further and correlates those KPIs to suggest where an issue could potentially be.
      3. Predictive – Predictive analysis involves more complexity because, as the name suggests, it predicts what is likely to happen in the future based on data from the past, or on a crossover between multiple datasets and sources. In a nutshell, it tries to predict the future based on actions from the past.
      4. Prescriptive – Prescriptive analysis builds on all the previous types. It suggests to stakeholders the most data-driven decisions to take based on past events and outcomes. Prescriptive analysis relies heavily on machine learning to find patterns and their corresponding remediations across large datasets.
    3. Analytics helps people develop insights, and those insights help them to deal with complex problem solving. No matter if it is regarding gaming, stock markets, real-estate data, traffic information, fashion computer systems, web server or security logs, data analytics help to provide answers to complex scenarios.
    4. Factors that determine data quality (a pandas audit sketch follows this list):
      1. Missing Records – Your company has over 500 customers in California alone, but the reports show data for only about 200 accounts in the entire western region.
      2. Duplicate Records – A quick look at a list of accounts shows that data for customers with multiple locations is captured in multiple account records. In fact, so many customers appear in so many records that you’re not even sure what defines a customer. Is it an address? A company name?
      3. No Data Standards – A regional breakdown shows customers in 87 states. Geography class was a long time ago, but you seem to remember only 50 states. For example, California is listed as: CA, Calif, Cali, and, your favorite, “Surfin’, USA.”
      4. Incomplete Records – Nearly all accounts in the western region are missing key data. Consumer accounts are missing data like phone and email. Business accounts are missing industry, revenue, and number of employees.
      5. Stale Data – At least half of all accounts in the western region haven’t been updated in the last 6 months, so you don’t know how accurate the data is. And that data doesn’t even include accounts not captured in Salesforce.
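As flagged above, a minimal pandas sketch (my own illustration with hypothetical file and column names) that checks an account export for these factors might look like this:

```python
# Quick data-quality audit of a hypothetical account export.
import pandas as pd

df = pd.read_csv("accounts_west.csv")  # hypothetical file and column names

print(len(df), "records loaded (compare to the ~500 customers you expect)")

dupes = df.duplicated(subset=["Name", "BillingStreet"]).sum()  # duplicates
print(dupes, "possible duplicate records")

print(df["BillingState"].nunique(), "distinct state values (should be ~50)")

blanks = df[["Phone", "Email", "Industry"]].isna().mean()      # incomplete
print("blank rate per key field:\n", blanks)

age_days = (pd.Timestamp.now() - pd.to_datetime(df["LastModifiedDate"])).dt.days
print(f"{(age_days > 180).mean():.0%} of records untouched in 6 months")  # stale
```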
    5. Bad data is consistently linked with:
      1. Lost revenue
      2. Missing or inaccurate insights
      3. Wasted time and resources
      4. Inefficiency
      5. Slow info retrieval
      6. Poor customer service
      7. Reputational damage
      8. Decreased adoption by reps
    6. Good data lets your company:
      1. Prospect and target new customers
      2. Identify cross-sell and upsell opportunities
      3. Gain account insights
      4. Increase efficiency
      5. Retrieve the right info fast
      6. Build trust with customers
      7. Increase adoption by reps
      8. Plan and align territories better
      9. Score and route leads faster
    7. Data quality has several key attributes. It’s important to understand these dimensions before you try to fix any data problems.
      1. Age – Data doesn’t age like fine wine. When was the last time each record was updated? How to assess it: Run a report on the Last Modified Date of records. What percentage of records have been updated recently?
      2. Completeness – Peanut butter without the jelly? No way! Similarly, you can’t find upsell opportunities without complete company hierarchy and industry information. Are all key business fields on records filled in? How to assess it: List the fields required for each business use, then run a report that shows the percentage of blanks for these fields. You can also use a data quality app from AppExchange.
      3. Accuracy – You don’t win Olympic gold for missing the target. Is your data as accurate as possible? Has it been matched against a trusted source? How to assess it: Install a data quality app from AppExchange. It can match your records against a trusted source and tell you how your data can be improved.
      4. Consistency – Is the same formatting, spelling, and language used across records? How to assess it: Run a report to show the values used for date, currency, state, country, region, and language fields. How many variations are used for a single value?
      5. Duplication – Sometimes two isn’t better than one. Duplicate data often means inefficiencies. Are records and data duplicated in your org? How to assess it: Use the Duplicate Management features in Salesforce and install a duplicate detection app from AppExchange.
      6. Usage – Use it or lose it! Is your data being harnessed in reports, dashboards, and apps? How to assess it: Review the available tools and resources your business uses. Are you optimizing data use?
    8. Data Cloud brings the power of real-time data to the Customer 360, so you can create magical experiences seamlessly.
    9. Data Cloud has built-in connectors that bring in data from any source, including Salesforce apps, mobile, web, connected devices, and even from legacy systems with MuleSoft and historical data from proprietary data lakes, in real time.
    10. Data Cloud enables any team to create magical experiences.
      1. Sales – Every sales rep can receive real-time guidance during customer video and voice calls to adapt to the conversation and deliver personalized offers to their customers.
      2. Service – Every service rep, from the contact center to the field, can provide proactive service with real-time alerts that detect challenges, enable agents to intervene, engage the customer, and resolve the issue.
      3. Marketing – Every marketer can deliver personalized messages across channels that adapt to customer activity across various brand properties in real time.
      4. Commerce – Every retailer can build tailored shopper experiences that adapt to real-time customer actions, including abandoned shopping carts or actions taken on a website or mobile app.
      5. Platform – IT teams can use low-code tools to build apps that leverage real-time data, for example to provide fraud detection or real-time economic data to determine benefits.
      6. MuleSoft – Every business can unlock real-time data across any modern or legacy system.
      7. Tableau – Every business can monitor KPIs in real time to inform action across the business, including real-time purchase data for sales, real-time case spikes for service, and real-time web traffic for marketing.
      8. Slack – Leaders can immediately increase efficiency by enabling teams to automatically view real-time data from any channel with intelligent workflows.
      9. Healthcare & Life Sciences – Payer and provider organizations can connect clinical and non-clinical data from a variety of sources to deliver real-time intelligent insights, which can be used to build automated journeys to help patients achieve better outcomes.
      10. Financial Services – Financial advisors and bankers can help their clients accelerate their financial goals by providing the right advice at the right time.
      11. AppExchange – Extend the power of Data Cloud with the AppExchange Data Cloud Collection, featuring 18 Data Cloud partner apps and experts that help companies automate relevant advertising and enrich customer profiles.
    11. Is Your Data AI-Ready?
      1. Customer data is at the heart of delivering great experiences. Your data does not need to be perfect to build an effective AI program, but it needs to be clean. That means free of errors, incorrect formats, duplicates, or mislabelings. 
      2. The data experts at Tableau offer these steps on how to clean your data, an important first step in unifying data sets for AI projects (a minimal pandas sketch follows this list):
        1. Remove duplicate or irrelevant observations – Duplication happens when you combine data sets from multiple places, and duplicate entries are created. Irrelevant observations happen when data (say, on older consumers) doesn’t fit into a problem you’re trying to analyze (say, millennial shopping habits). Removing these makes analysis more efficient, useful, and accurate for an AI system.
        2. Fix structural errors – This happens when data includes typos, incorrect capitalization, or mislabelings. For example, “N/A” and “not applicable” mean the same thing, but are not analyzed the same way because they’re rendered differently. The entries should be consistent to ensure accurate and complete analysis by the AI system.
        3. Filter unwanted outliers – There are often one-off observations that don’t appear to align with the data you’re analyzing. That might be the result of incorrect data entry (and should be removed) but sometimes the outlier will help prove a theory you’re working on. In any case, analysis is needed to determine its validity.
        4. Handle missing data – Missing or incomplete data is a very common problem in data sets, and can reduce the accuracy of AI models. There are a few ways to deal with this:
          1. Eliminate observations that include missing values; however, this will result in lost information.
          2. Input missing values based on other observations; however, you may lose data integrity because you’re operating from assumptions and not actual observations.
          3. Consider altering the way the data is used to effectively navigate the missing values.
        5. Validate – After cleaning the data, you should be able to answer these questions:
          1. Does the data make sense? 
          2. Does the data follow the appropriate rules for its field? 
          3. Does it prove or disprove your theory, or surface any insight?
          4. Can you find trends that help inform the next theory? If not, is that because of continued data quality issues? 
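Here is that cleaning checklist as a minimal pandas sketch (my own illustration with hypothetical column names, not Tableau’s code):

```python
# The cleaning steps above, applied to a hypothetical customer file.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical input

# 1. Remove duplicate or irrelevant observations.
df = df.drop_duplicates()

# 2. Fix structural errors: normalize case and unify "N/A"-style labels.
df["status"] = df["status"].str.strip().str.lower()
df["status"] = df["status"].replace({"n/a": "not applicable"})

# 3. Filter unwanted outliers (a simple IQR rule; inspect before dropping).
q1, q3 = df["order_value"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["order_value"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# 4. Handle missing data: drop, impute, or flag -- each has trade-offs.
df["region"] = df["region"].fillna("unknown")

# 5. Validate: does the cleaned data still make sense?
assert df["order_value"].ge(0).all(), "negative order values slipped through"
```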
    12. Data Management Best Practice Guide
    13. Determine Data Requirements
    14. Build Your Data Literacy
    15. Data Quality is a measurement of the degree to which data is fit for purpose. Good data quality generates trust in data. Data Quality Dimensions are a measurement of a specific attribute of a data’s quality.
      1. Completeness measures the degree to which all expected records in a dataset are present. At a data element level, completeness is the degree to which all records have data populated when expected.
      2. Validity measures the degree to which the values in a data element are valid.
      3. Uniqueness measures the degree to which the records in a dataset are not duplicated.
      4. Timeliness is the degree to which a dataset is available when expected and depends on service level agreements being set up between technical and business resources.
      5. Consistency is a data quality dimension that measures the degree to which data is the same across all instances of the data. Consistency can be measured by setting a threshold for how much difference there can be between two datasets.
      6. Accuracy measures the degree to which data correctly matches a trusted source. For example, all records in the Customer Table must have accurate Customer Name, Customer Birthdate, and Customer Address fields when compared to the Tax Form.
    16. Data literacy is the ability to read, understand, create, and communicate data as information.
    17. Data is individual facts, statistics, or items of information. A collection of data is a collection of facts. Even more specifically, consider this expanded definition. Jeffrey Leek, a data scientist working as a professor at Johns Hopkins Bloomberg School of Public Health, started with Wikipedia’s definition of data and expanded it to form his own definition: Data is comprised of [sic] values of qualitative or quantitative variables, belonging to a set of items.
      1. Set of items – Sometimes called the population, this is the group of objects you are interested in.
      2. Variable – A measurement, property, or characteristic of an item that may vary or change (as opposed to a constant measurement, such as pi, that does not vary).
      3. Qualitative variable – A qualitative variable describes qualities or characteristics, such as country of origin, gender, name, or hair color.
      4. Quantitative variable – A quantitative variable describes measurable characteristics, such as height, weight, or temperature.
    18. Some examples of raw data include:
      1. A bacteria specimen viewed under a microscope
      2. Binary files produced by measurement machines
      3. Unformatted spreadsheet files
      4. JSON data scraped from the Twitter API
      5. Numbers collected and recorded manually
    19. These are some of the traits of high-quality data.
      1. High Volume – A large amount of relevant, available data means that there’s a better chance you’ll have what you need to answer your questions. Note: There is no need to simply acquire data for its own sake; relevancy is important.
      2. Historical – Data that goes back in time allows you to see how the present situation arose due to patterns that have arisen over time, such as looking at sales trends over the last 10 years to see increases or decreases.
      3. Consistent – As things change, data should be adjusted for consistency. Salary and price data adjusted for inflation is a good example of this.
      4. Multivariate – Data should contain both quantitative (numerically measurable) and qualitative (characteristic, not numerically measurable) variables. The more variables in the data, the more you can discover from it.
      5. Atomic – The more finely detailed the data, the more you are able to examine it at various levels of detail. For example, if you wanted to understand bicycle riding trends in your state, it would be helpful to see these trends as impacted by county, city, and neighborhood.
      6. Clean – In order for data to be useful, it should be accurate, complete, and free from errors.
      7. Clear – Data should be written in terms that can be easily understood, not in code. For example, the housing type values single family, two-family conversion, and end-unit townhouse are much easier to understand than 1Fam, 2fmCon, and TwnhsE.
      8. Dimensionally Structured – An accessible way to structure data is to organize it into two types: Dimensions (qualitative values) and Measures (quantitative values). This is the organizational structure Tableau uses when interpreting data.
      9. Richly Segmented – Groups, based on similar characteristics, should be built into data for easier analysis. For example, data about movies could be grouped by genre (action, science fiction, romance, comedy, and so on).
      10. Of Known Pedigree – In order to trust the data, you should know its background: where it comes from and how it has since been altered.
    20. Options you can use to restructure data include (a short sketch follows this list):
      1. Changing the underlying database
      2. Using a programming language, such as R or Python
      3. Using tools, such as pivoting and splitting data, within the Tableau Platform, including Tableau Prep Builder or Tableau Desktop
      4. Using other ETL (Extract, Transform, Load) tools
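For instance, here is a minimal pandas sketch (with made-up data) of the pivoting and splitting operations mentioned above:

```python
# Restructuring data with pandas: unpivoting (melt) and splitting a field.
import pandas as pd

wide = pd.DataFrame({
    "region": ["West", "East"],
    "2022_sales": [100, 80],
    "2023_sales": [120, 90],
})

# Unpivot: one column per year becomes one row per (region, year).
long = wide.melt(id_vars="region", var_name="year", value_name="sales")

# Split: separate the "2022_sales" label into its year component.
long["year"] = long["year"].str.split("_").str[0]
print(long)
```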
    21. In the Well-Structured Data module you learned that data is organized into columns, or fields, and that in well-structured data fields are made up of variables, one variable per field (a small sketch of these variable types follows this list).
      1. Qualitative variables are variables that cannot be measured numerically, such as categories or characteristics. This can be further classified into two types: nominal and ordinal.
        1. Nominal: Nominal qualitative variables are categories that cannot be ranked. For example, let’s consider a few types of fruit: bananas, grapes, apricots, and apples. These are nominal variables because there is no implied ranked order among them. A banana, for instance, is not ranked more highly than an apricot.
        2. Ordinal: In contrast to nominal qualitative variables, ordinal qualitative variables can be ranked. They are qualitative because they are not numerically measurable, but there is a logical rank-order among them. For example, think of surveys you may have taken. Examples of ordinal qualitative values on surveys are: Never, Sometimes, Mostly, Always, Extremely dissatisfied, Dissatisfied, Neither satisfied nor dissatisfied, Satisfied, Extremely satisfied.
      2. Quantitative variables are variables that can be measured numerically, such as the number of items in a set. When added to a data set, qualitative variables become qualitative fields (or columns) and quantitative variables become quantitative fields (or columns). This can be further classified into two types: discrete and continuous.
        1. Discrete Variables: Discrete variables are individually separate and distinct. Simply stated, if you can count it individually, it is a discrete variable. For example, you can count the number of children in a household individually. A household can have 0 children, 3 children, 6 children, and so on, but it can not have 3.45 children. The number of toes on a foot and the total number of socks in a drawer are also examples of discrete variables. The total number of toes on all the feet of all the people in your city is even a discrete variable. It would take a long time to individually count all those toes, but it’s still possible to do so.
        2. Continuous Variables: Continuous means forming an unbroken whole, without interruption. These are variables that cannot be counted in a finite amount of time because there is an infinite number of values between any two values. For example, if you want to measure time, every unit of time can be broken into even smaller units: The response time to a stimulus could be expressed as 1.64 seconds, or it could be further broken down and expressed as 1.642378765 seconds, and so on, infinitely. Other examples of continuous values include temperature, distance, and mass.
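As mentioned above, here is a small sketch (with made-up values) of the four variable types using pandas:

```python
# Nominal, ordinal, discrete, and continuous variables in pandas.
import pandas as pd

nominal = pd.Categorical(["banana", "grape", "apricot"])  # no rank order

ordinal = pd.Categorical(                                 # rank order matters
    ["Sometimes", "Always", "Never"],
    categories=["Never", "Sometimes", "Mostly", "Always"],
    ordered=True,
)
print(ordinal < "Mostly")  # ordered comparisons are meaningful

discrete = pd.Series([0, 3, 6])               # individually countable
continuous = pd.Series([1.64, 1.642378765])   # infinitely divisible
```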
    22. Aggregation refers to a collection of quantitative data and can show large data trends. For example, summing all web searches for a particular campground or taking the average income of all wage earners in a city. 
    23. Granularity refers to how detailed data is. 
    24. Correlation is a technique that can show whether and how strongly pairs of quantitative variables are related.
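A small sketch (with made-up data) ties these three ideas together:

```python
# Aggregation, granularity, and correlation in pandas.
import pandas as pd

df = pd.DataFrame({  # row-level (fine-grained) data
    "city": ["Oakland", "Oakland", "Fresno", "Fresno"],
    "income": [65000, 72000, 48000, 51000],
    "rent": [2200, 2400, 1300, 1400],
})

# Aggregation: collapse fine-grained rows into a coarser, city-level average.
print(df.groupby("city")["income"].mean())

# Correlation: how strongly two quantitative variables move together.
print(df["income"].corr(df["rent"]))  # Pearson r, between -1 and 1
```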

Conclusion

If you have basic experience with all the above topics, passing the exam will be a cinch, and you will be able to earn the much-coveted Salesforce Certified AI Associate certification! However, if you do not yet have enough experience with the basics of AI and the Salesforce platform and you plan to become a Certified AI Associate, I suggest you draw up a 3-4 week plan (and finish the above Trailhead content to prepare).

I hope that you find these tips and resources useful. If you put the time and effort in, you will succeed. Happy studying and good luck!

Formative Assessment:

I want to hear from you!

Have you taken the Salesforce Certified AI Associate exam? Are you preparing for the exam now? Share your tips in the comments!

Have feedback, suggestions for posts, or need more information about Salesforce online training offered by me? Say hello, and leave a message!
