Machine learning is best described as a type of algorithm by which?
- A . Systems can mimic human intelligence with the goal of replacing humans.
- B . Systems can automatically improve from experience through predictive patterns.
- C . Statistical inferences are drawn from a sample with the goal of predicting human intelligence.
- D . Previously unknown properties are discovered in data and used to predict and make improvements in the data.
B
Explanation:
Machine learning (ML) is a subset of artificial intelligence (AI) where systems use data to learn and improve over time without being explicitly programmed. Option B accurately describes machine learning by stating that systems can automatically improve from experience through predictive patterns. This aligns with the fundamental concept of ML where algorithms analyze data, recognize patterns, and make decisions with minimal human intervention.
Reference: AIGP BODY OF KNOWLEDGE, which covers the basics of AI and machine learning concepts.
You asked a generative AI tool to recommend new restaurants to explore in Boston, Massachusetts that have a specialty Italian dish made in a traditional fashion without spinach and wine. The generative AI tool recommended five restaurants for you to visit.
After looking up the restaurants, you discovered one restaurant did not exist and two others did not have the dish.
This information provided by the generative AI tool is an example of what is commonly called?
- A . Prompt injection.
- B . Model collapse.
- C . Hallucination.
- D . Overfitting.
C
Explanation:
In the context of AI, particularly generative models, "hallucination" refers to the generation of outputs that are not based on the training data and are factually incorrect or non-existent. The scenario described involves the generative AI tool providing incorrect and non-existent information about restaurants, which fits the definition of hallucination.
Reference: AIGP BODY OF KNOWLEDGE and various AI literature discussing the limitations and challenges of generative AI models.
Each of the following actors is typically engaged in the AI development life cycle EXCEPT?
- A . Data architects.
- B . Government regulators.
- C . Socio-cultural and technical experts.
- D . Legal and privacy governance experts.
B
Explanation:
Typically, actors involved in the AI development life cycle include data architects (who design the data frameworks), socio-cultural and technical experts (who ensure the AI system is socio-culturally aware and technically sound), and legal and privacy governance experts (who handle the legal and privacy aspects). Government regulators, while important, are not directly engaged in the development process but rather oversee and regulate the industry.
Reference: AIGP BODY OF KNOWLEDGE and AI development frameworks.
A company is working to develop a self-driving car that can independently decide the appropriate route to take the driver after the driver provides an address.
If they want to make this self-driving car “strong” AI, as opposed to “weak,” the engineers would also need to ensure?
- A . That the AI has full human cognitive abilities that can independently decide where to take the driver.
- B . That they have obtained appropriate intellectual property (IP) licenses to use data for training the AI.
- C . That the AI has strong cybersecurity to prevent malicious actors from taking control of the car.
- D . That the AI can differentiate among ethnic backgrounds of pedestrians.
A
Explanation:
Strong AI, also known as artificial general intelligence (AGI), refers to AI that possesses the ability to understand, learn, and apply intelligence across a broad range of tasks, similar to human cognitive abilities. For the self-driving car to be classified as "strong" AI, it would need to possess full human cognitive abilities to make independent decisions beyond pre-programmed instructions.
Reference: AIGP BODY OF KNOWLEDGE and AI classifications.
Which of the following is NOT a common type of machine learning?
- A . Deep learning.
- B . Cognitive learning.
- C . Unsupervised learning.
- D . Reinforcement learning.
B
Explanation:
The common types of machine learning include supervised learning, unsupervised learning, reinforcement learning, and deep learning. Cognitive learning is not a type of machine learning; rather, it is a term often associated with the broader field of cognitive science and psychology.
Reference: AIGP BODY OF KNOWLEDGE and standard AI/ML literature.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data―including applications, policies, and claims―and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications due primarily to women historically receiving lower salaries than men.
Which of the following is the most important reason to train the underwriters on the model prior to deployment?
- A . To provide a reminder of a right to appeal.
- B . To solicit on-going feedback on model performance.
- C . To apply their own judgment to the initial assessment.
- D . To ensure they provide transparency to applicants about the model.
C
Explanation:
Training underwriters on the model prior to deployment is crucial so they can apply their own judgment to the initial assessment. While AI models can streamline the process, human judgment is still essential to catch nuances that the model might miss or to account for any biases or errors in the model’s decision-making process.
Reference: The AIGP Body of Knowledge emphasizes the importance of human oversight in AI systems, particularly in high-stakes areas such as underwriting and loan approvals. Human underwriters can provide a critical review and ensure that the model’s assessments are accurate and fair, integrating their expertise and understanding of complex cases.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data―including applications, policies, and claims―and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications due primarily to women historically receiving lower salaries than men.
During the first month when ABC monitors the model for bias, it is most important to?
- A . Continue disparity testing.
- B . Analyze the quality of the training and testing data.
- C . Compare the results to human decisions prior to deployment.
- D . Seek approval from management for any changes to the model.
A
Explanation:
During the first month of monitoring the model for bias, it is most important to continue disparity testing. Disparity testing involves regularly evaluating the model’s decisions to identify and address any biases, ensuring that the model operates fairly across different demographic groups.
Reference: Regular disparity testing is highlighted in the AIGP Body of Knowledge as a critical practice for maintaining the fairness and reliability of AI models. By continuously monitoring for and addressing disparities, organizations can ensure their AI systems remain compliant with ethical and legal standards, and mitigate any unintended biases that may arise in production.
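To make the idea of disparity testing concrete, here is a minimal sketch (illustrative helper names, not any specific compliance tool): it computes per-group approval rates from decision logs and the adverse-impact ratio that is often checked against the "four-fifths" rule of thumb.

```python
# Minimal disparity-testing sketch: per-group approval rates and the
# adverse-impact ratio between a protected group and a reference group.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return rates[protected] / rates[reference]

# Toy decision log for one month of model output.
decisions = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", True), ("men", False),
]
rates = approval_rates(decisions)                     # women: 0.25, men: 0.75
ratio = adverse_impact_ratio(rates, "women", "men")   # 0.25 / 0.75 ≈ 0.33
print(rates, round(ratio, 2))  # well below the 0.8 rule of thumb: flag for review
```

Running this test on each batch of production decisions, rather than once at deployment, is what distinguishes ongoing disparity testing from a one-time fairness audit.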
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data―including applications, policies, and claims―and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications due primarily to women historically receiving lower salaries than men.
What is the best strategy to mitigate the bias uncovered in the loan applications?
- A . Retrain the model with data that reflects demographic parity.
- B . Procure a third-party statistical bias assessment tool.
- C . Document all instances of bias in the data set.
- D . Delete all gender-based data in the data set.
A
Explanation:
Retraining the model with data that reflects demographic parity is the best strategy to mitigate the bias uncovered in the loan applications. This approach addresses the root cause of the bias by ensuring that the training data is representative and balanced, leading to more equitable decision-making by the AI model.
Reference: The AIGP Body of Knowledge stresses the importance of using high-quality, unbiased training data to develop fair and reliable AI systems. Retraining the model with balanced data helps correct biases that arise from historical inequalities, ensuring that the AI system makes decisions based on equitable criteria.
Which of the following is a subcategory of AI and machine learning that uses labeled datasets to train algorithms?
- A . Segmentation.
- B . Generative AI.
- C . Expert systems.
- D . Supervised learning.
D
Explanation:
Supervised learning is a subcategory of AI and machine learning where labeled datasets are used to train algorithms. This process involves feeding the algorithm a dataset where the input-output pairs are known, allowing the algorithm to learn and make predictions or decisions based on new, unseen data.
Reference: AIGP BODY OF KNOWLEDGE, which describes supervised learning as a model trained on labeled data (e.g., text recognition, detecting spam in emails).
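As a toy illustration of the labeled-data idea (a hypothetical example, not from the AIGP materials), the sketch below "trains" a 1-nearest-neighbor classifier on input-output pairs and then labels unseen inputs by proximity to the labeled examples.

```python
# Supervised learning in miniature: a 1-nearest-neighbor classifier.
# Training data is a set of (features, label) pairs; prediction returns
# the label of the closest training example (squared Euclidean distance).

def predict(labeled_data, x):
    """Return the label of the training example nearest to x."""
    nearest = min(
        labeled_data,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], x)),
    )
    return nearest[1]

# Labeled dataset: emails described by (num_links, num_spam_words) -> label,
# echoing the spam-detection example cited in the reference above.
training = [
    ((0, 0), "ham"), ((1, 0), "ham"),
    ((5, 4), "spam"), ((7, 6), "spam"),
]
print(predict(training, (6, 5)))  # "spam": nearest to the labeled spam examples
print(predict(training, (0, 1)))  # "ham": nearest to the labeled ham examples
```

The essential supervised-learning property is visible even in this tiny sketch: the correct answers (labels) are supplied up front, and the algorithm generalizes from them to new inputs.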
A company developed AI technology that can analyze text, video, images and sound to tag content, including the names of animals, humans and objects.
What type of AI is this technology classified as?
- A . Deductive inference.
- B . Multi-modal model.
- C . Transformative AI.
- D . Expert system.
B
Explanation:
A multi-modal model is an AI system that can process and analyze multiple types of data, such as text, video, images, and sound. This type of AI integrates different data sources to enhance its understanding and decision-making capabilities. In the given scenario, the AI technology that tags content including names of animals, humans, and objects falls under this category.
Reference: AIGP BODY OF KNOWLEDGE, which outlines the capabilities and use cases of multi-modal models.
All of the following are common optimization techniques in deep learning to determine weights that represent the strength of the connection between artificial neurons EXCEPT?
- A . Gradient descent, which initially sets weights to arbitrary values and then adjusts them at each step.
- B . Momentum, which improves the convergence speed and stability of neural network training.
- C . Autoregression, which analyzes and makes predictions about time-series data.
- D . Backpropagation, which starts from the last layer working backwards.
C
Explanation:
Autoregression is not a common optimization technique in deep learning to determine weights for artificial neurons. Common techniques include gradient descent, momentum, and backpropagation. Autoregression is more commonly associated with time-series analysis and forecasting rather than neural network optimization.
Reference: AIGP BODY OF KNOWLEDGE, which discusses common optimization techniques used in deep learning.
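The two correct techniques named in options A and B can be shown together in a few lines. The sketch below (a 1-D toy problem, assumed for illustration) starts a weight at an arbitrary value and repeatedly moves it against the gradient, with a momentum term that accumulates past gradients to speed and stabilize convergence.

```python
# Gradient descent with momentum on the 1-D loss f(w) = (w - 3)^2,
# whose minimum is at w = 3. "grad" is the derivative of the loss.

def minimize(grad, w0, lr=0.1, beta=0.9, steps=200):
    """Return the weight after `steps` momentum-accelerated gradient updates."""
    w, velocity = w0, 0.0
    for _ in range(steps):
        velocity = beta * velocity - lr * grad(w)  # momentum: decay + new gradient
        w += velocity                              # weight update
    return w

grad = lambda w: 2 * (w - 3)      # derivative of (w - 3)^2
w = minimize(grad, w0=10.0)       # arbitrary starting weight
print(w)                          # converges close to the minimum at w = 3
```

Backpropagation, the third correct option, is the separate step that computes these gradients for each layer of a network, working backwards from the output layer; the update rule above then consumes them.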
What is the key feature of Graphics Processing Units (GPUs) that makes them well-suited to running AI applications?
- A . GPUs run many tasks concurrently, resulting in faster processing.
- B . GPUs can access memory quickly, resulting in lower latency than CPUs.
- C . GPUs can run every task on a computer, making them more robust than CPUs.
- D . The number of transistors on GPUs doubles every two years, making the chips smaller and lighter.
A
Explanation:
GPUs (Graphics Processing Units) are well-suited to running AI applications due to their ability to run many tasks concurrently, which significantly enhances processing speed. This parallel processing capability makes GPUs ideal for handling the large-scale computations required in AI and deep learning tasks.
Reference: AIGP BODY OF KNOWLEDGE, which explains the importance of compute infrastructure in AI applications.
Which of the following best defines an "AI model"?
- A . A system that applies defined rules to execute tasks.
- B . A system of controls that is used to govern an AI algorithm.
- C . A corpus of data which an AI algorithm analyzes to make predictions.
- D . A program that has been trained on a set of data to find patterns within the data.
D
Explanation:
An AI model is best defined as a program that has been trained on a set of data to find patterns within that data. This definition captures the essence of machine learning, where the model learns from the data to make predictions or decisions.
Reference: AIGP BODY OF KNOWLEDGE, which provides a detailed explanation of AI models and their training processes.
CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers they employ used open source large language models (“LLM”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
Which of the following risks should be of the highest concern to individual teachers using generative AI to ensure students learn the course material?
- A . Financial cost.
- B . Model accuracy.
- C . Technical complexity.
- D . Copyright infringement.
B
Explanation:
The highest concern for individual teachers using generative AI to ensure students learn the course material is model accuracy. Ensuring that the AI-generated content is accurate and relevant to the curriculum is crucial for effective learning. If the AI model produces inaccurate or irrelevant content, it can mislead students and hinder their understanding of the subject matter.
Reference: According to the AIGP Body of Knowledge, one of the core risks posed by AI systems is the accuracy of the data and models used. Ensuring the accuracy of AI-generated content is essential for maintaining the integrity of the educational material and achieving the desired learning outcomes.
CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers they employ used open source large language models (“LLM”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
What is the best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances?
- A . To enable students to learn how to manage their time.
- B . To enable students to learn about performing research.
- C . To enable students to learn about practical applications of AI.
- D . To enable students to learn how to use AI as a supportive educational tool.
D
Explanation:
The best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances is to enable students to learn how to use AI as a supportive educational tool. By integrating AI in a controlled manner, students can learn the practical applications of AI and develop skills to use AI responsibly and effectively in their educational pursuits.
Reference: The AIGP Body of Knowledge highlights the importance of teaching students about AI’s practical applications and the responsible use of AI technologies. This aligns with the goal of fostering a better understanding of AI’s role and its potential benefits in various contexts, including education.
CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers they employ used open source large language models (“LLM”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
All of the following may be copyright risks from teachers using generative AI to create course content EXCEPT?
- A . Content created by an LLM may be protectable under U.S. intellectual property law.
- B . Generative AI is generally trained using intellectual property owned by third parties.
- C . Students must expressly consent to this use of generative AI.
- D . Generative AI often creates content without attribution.
C
Explanation:
All of the options listed may pose copyright risks when teachers use generative AI to create course content, except the requirement that students expressly consent to this use of generative AI. While obtaining student consent is essential for ethical and privacy reasons, it does not directly relate to copyright risks associated with the creation and use of AI-generated content.
Reference: The AIGP Body of Knowledge discusses the importance of addressing intellectual property (IP) risks when using AI-generated content. Copyright risks are typically associated with the use of third-party data and the lack of attribution, rather than the consent of users.
Random forest algorithms fall under what type of machine learning model?
- A . Symbolic.
- B . Generative.
- C . Discriminative.
- D . Natural language processing.
C
Explanation:
Random forest algorithms are classified as discriminative models. Discriminative models are used to classify data by learning the boundaries between classes, which is the core functionality of random forest algorithms. They are used for classification and regression tasks by aggregating the results of multiple decision trees to make accurate predictions.
Reference: The AIGP Body of Knowledge explains that discriminative models, including random forest algorithms, are designed to distinguish between different classes in the data, making them effective for various predictive modeling tasks.
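To make the aggregation idea concrete, the sketch below uses hand-picked decision stumps (one-feature threshold tests) instead of trees learned from data, and takes a majority vote across them. This voting over class boundaries is the discriminative, ensemble principle behind a random forest, reduced to a few lines.

```python
# Majority voting over simple threshold classifiers, the aggregation
# principle behind a random forest (stumps are hand-picked, not learned).
from collections import Counter

def stump(feature_index, threshold):
    """A one-rule 'tree': predict class 1 if the chosen feature exceeds the threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def forest_predict(stumps, x):
    """Each stump votes; the majority class wins."""
    votes = Counter(s(x) for s in stumps)
    return votes.most_common(1)[0][0]

# Three stumps looking at different features/thresholds.
stumps = [stump(0, 2.0), stump(1, 1.0), stump(0, 4.0)]
print(forest_predict(stumps, (5.0, 3.0)))  # 1: all three stumps vote class 1
print(forest_predict(stumps, (1.0, 0.5)))  # 0: all three stumps vote class 0
```

A real random forest grows each tree on a random bootstrap sample of the data with random feature subsets, but the prediction step is exactly this: aggregate many weak discriminative learners into one robust vote.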
What is the 1956 Dartmouth summer research project on AI best known as?
- A . A meeting focused on the impacts of the launch of the first mass-produced computer.
- B . A research project on the impacts of technology on society.
- C . A research project to create a test for machine intelligence.
- D . A meeting focused on the founding of the Al field.
D
Explanation:
The 1956 Dartmouth summer research project on AI is best known as a meeting focused on the founding of the AI field. This conference is historically significant because it marked the formal beginning of artificial intelligence as an academic discipline. The term "artificial intelligence" was coined during this event, and it laid the foundation for future research and development in AI.
Reference: The AIGP Body of Knowledge highlights the importance of the Dartmouth Conference as a pivotal moment in the history of AI, which established AI as a distinct field of study and research.
What is the primary purpose of an AI impact assessment?
- A . To define and evaluate the legal risks associated with developing an AI system.
- B . To anticipate and manage the potential risks and harms of an AI system.
- C . To define and document the roles and responsibilities of AI stakeholders.
- D . To identify and measure the benefits of an AI system.
B
Explanation:
The primary purpose of an AI impact assessment is to anticipate and manage the potential risks and harms of an AI system. This includes identifying the possible negative outcomes and implementing measures to mitigate these risks. This process helps ensure that AI systems are developed and deployed in a manner that is ethically and socially responsible, addressing concerns such as bias, fairness, transparency, and accountability. The assessment often involves a thorough evaluation of the AI system’s design, data inputs, outputs, and the potential impact on various stakeholders. This approach is crucial for maintaining public trust and adherence to regulatory requirements.
What type of organizational risk is associated with AI’s resource-intensive computing demands?
- A . People risk.
- B . Security risk.
- C . Third-party risk.
- D . Environmental risk.
D
Explanation:
AI’s resource-intensive computing demands pose significant environmental risks. High-performance computing required for training and deploying AI models often leads to substantial energy consumption, which can result in increased carbon emissions and other environmental impacts. This is particularly relevant given the growing concern over climate change and the environmental footprint of technology. Organizations need to consider these environmental risks when developing AI systems, potentially exploring more energy-efficient methods and renewable energy sources to mitigate the environmental impact.
Which of the following most encourages accountability over AI systems?
- A . Determining the business objective and success criteria for the AI project.
- B . Performing due diligence on third-party AI training and testing data.
- C . Defining the roles and responsibilities of AI stakeholders.
- D . Understanding AI legal and regulatory requirements.
C
Explanation:
Defining the roles and responsibilities of AI stakeholders is crucial for encouraging accountability over AI systems. Clear delineation of who is responsible for different aspects of the AI lifecycle ensures that there is a person or team accountable for monitoring, maintaining, and addressing issues that arise. This accountability framework helps in ensuring that ethical standards and regulatory requirements are met, and it facilitates transparency and traceability in AI operations. By assigning specific roles, organizations can better manage and mitigate risks associated with AI deployment and use.
An AI system that maintains its level of performance within defined acceptable limits despite real world or adversarial conditions would be described as?
- A . Robust.
- B . Reliable.
- C . Resilient.
- D . Reinforced.
C
Explanation:
An AI system that maintains its level of performance within defined acceptable limits despite real-world or adversarial conditions is described as resilient. Resilience in AI refers to the system’s ability to withstand and recover from unexpected challenges, such as cyber-attacks, hardware failures, or unusual input data. This characteristic ensures that the AI system can continue to function effectively and reliably in various conditions, maintaining performance and integrity. Robustness, on the other hand, focuses on the system’s strength against errors, while reliability ensures consistent performance over time. Resilience combines these aspects with the capacity to adapt and recover.
If it is possible to provide a rationale for a specific output of an AI system, that system can best be described as?
- A . Accountable.
- B . Transparent.
- C . Explainable.
- D . Reliable.
C
Explanation:
If it is possible to provide a rationale for a specific output of an AI system, that system can best be described as explainable. Explainability in AI refers to the ability to interpret and understand the decision-making process of the AI system. This involves being able to articulate the factors and logic that led to a particular output or decision. Explainability is critical for building trust, enabling users to understand and validate the AI system’s actions, and ensuring compliance with ethical and regulatory standards. It also facilitates debugging and improving the system by providing insights into its behavior.
The OECD’s Ethical AI Governance Framework is a self-regulation model that proposes to prevent societal harms by?
- A . Establishing explainability criteria to responsibly source and use data to train AI systems.
- B . Defining requirements specific to each industry sector and high-risk AI domain.
- C . Focusing on AI technical design and post-deployment monitoring.
- D . Balancing AI innovation with ethical considerations.
D
Explanation:
The OECD’s Ethical AI Governance Framework aims to ensure that AI development and deployment are carried out ethically while fostering innovation. The framework includes principles like transparency, accountability, and human rights protections to prevent societal harm. It does not focus solely on technical design or post-deployment monitoring (C), nor does it establish industry-specific requirements (B). While explainability is important, the primary goal is to balance innovation with ethical considerations (D).
The framework set forth in the White House Blueprint for an AI Bill of Rights addresses all of the following EXCEPT?
- A . Human alternatives, consideration and fallback.
- B . High-risk mitigation standards.
- C . Safe and effective systems.
- D . Data privacy.
B
Explanation:
The White House Blueprint for an AI Bill of Rights focuses on protecting civil rights, privacy, and ensuring AI systems are safe and effective. It includes principles like data privacy (D), human alternatives (A), and safe and effective systems (C). However, it does not specifically address high-risk mitigation standards as a distinct category (B).
A U.S. mortgage company developed an AI platform that was trained using anonymized details from mortgage applications, including the applicant’s education, employment and demographic information, as well as subsequent payment or default information. The AI platform will be used to automatically grant or deny new mortgage applications, depending on whether the platform views an applicant as presenting a likely risk of default.
Which of the following laws is NOT relevant to this use case?
- A . Fair Housing Act.
- B . Fair Credit Reporting Act.
- C . Equal Credit Opportunity Act.
- D . Title VII of the Civil Rights Act of 1964.
D
Explanation:
The U.S. mortgage company’s AI platform relates to housing and credit, making the Fair Housing Act (A), Fair Credit Reporting Act (B), and Equal Credit Opportunity Act (C) relevant. Title VII of the Civil Rights Act of 1964 deals with employment discrimination and is not directly relevant to the mortgage application context (D).
An EU bank intends to launch a multi-modal AI platform for customer engagement and automated decision-making to assist with the opening of bank accounts. The platform has been subject to thorough risk assessments and testing, where it proved to be effective in not discriminating against any individual on the basis of a protected class.
What additional obligations must the bank fulfill prior to deployment?
- A . The bank must obtain explicit consent from users under the ePrivacy Directive.
- B . The bank must disclose how the AI system works under the EU Digital Services Act.
- C . The bank must subject the AI system to an adequacy decision and publish its appropriate safeguards.
- D . The bank must disclose the use of the AI system and implement suitable measures for users to contest automated decision-making.
D
Explanation:
Under EU regulations, particularly the GDPR, banks using AI for decision-making must inform users about the use of AI and provide mechanisms for users to contest decisions. This is part of ensuring transparency and accountability in automated processing. Explicit consent under the ePrivacy Directive (A) and disclosure under the Digital Services Act (B) are not specifically required in this context. An adequacy decision relates to data transfers outside the EU (C).
According to the GDPR, what is an effective control to prevent a determination based solely on automated decision-making?
- A . Provide a just-in-time notice about the automated decision-making logic.
- B . Define suitable measures to safeguard personal data.
- C . Provide a right to review automated decision.
- D . Establish a human-in-the-loop procedure.
D
Explanation:
The GDPR requires that individuals have the right to not be subject to decisions based solely on automated processing, including profiling, unless specific exceptions apply. One effective control is to establish a human-in-the-loop procedure (D), ensuring human oversight and the ability to contest decisions. This goes beyond just-in-time notices (A), data safeguarding (B), or review rights (C), providing a more robust mechanism to protect individuals’ rights.
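The human-in-the-loop control described above can be illustrated with a minimal sketch (all names here are hypothetical and not tied to any real system): the model only proposes an outcome, and a human reviewer confirms or replaces it before the decision becomes final.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str     # "approve" or "deny"
    decided_by: str  # always "human" in this design

def decide(model_score: float, human_review, threshold: float = 0.5) -> Decision:
    """Route every significant decision through a human reviewer.

    The model proposes an outcome from its score; `human_review`
    (a callable standing in for a real review workflow) may confirm
    or replace it, so no decision is based solely on automation.
    """
    proposed = "approve" if model_score >= threshold else "deny"
    final = human_review(proposed, model_score)  # human may override
    return Decision(outcome=final, decided_by="human")

# Example reviewer policy: override borderline automated denials.
def reviewer(proposed: str, score: float) -> str:
    if proposed == "deny" and score > 0.45:
        return "approve"
    return proposed

borderline = decide(0.47, reviewer)
print(borderline.outcome, borderline.decided_by)
```

The key design point is that `decided_by` can never be `"model"`: the automated score is advisory input to a human decision, which is the substance of the Article 22 safeguard.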
According to the GDPR, an individual has the right to have a human confirm or replace an automated decision unless that automated decision?
- A . Is authorized with the data subject’s explicit consent.
- B . Is authorized by applicable EU law and includes suitable safeguards.
- C . Is deemed to solely benefit the individual and includes documented legitimate interests.
- D . Is necessary for entering into or performing under a contract between the data subject and data controller.
A
Explanation:
According to the GDPR, individuals have the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significantly affects them. However, there are exceptions to this right, one of which is when the decision is based on the data subject’s explicit consent. This means that if an individual explicitly consents to the automated decision-making process, there is no requirement for human intervention to confirm or replace the decision. This exception ensures that individuals can have control over automated decisions that affect them, provided they have given clear and informed consent.
According to the GDPR’s transparency principle, when an AI system processes personal data in automated decision-making, controllers are required to provide data subjects specific information on?
- A . The existence of automated decision-making and meaningful information on its logic and consequences.
- B . The personal data used during processing, including inferences drawn by the AI system about the data.
- C . The data protection impact assessments carried out on the Al system and legal bases for processing.
- D . The contact details of the data protection officer and the data protection national authority.
A
Explanation:
The GDPR’s transparency principle requires that when personal data is processed for automated decision-making, including profiling, data subjects must be informed about the existence of such automated decision-making. Additionally, they must be provided with meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for them. This requirement ensures that data subjects are fully aware of how their personal data is being used and the potential impacts, thereby promoting transparency and trust in the processing activities.
Gender
In addition, the app obtains a device’s IP address and location information while in use.
What GDPR privacy principles does this violate?
- A . Purpose Limitation and Data Minimization.
- B . Accountability and Lawfulness.
- C . Transparency and Accuracy.
- D . Integrity and Confidentiality.
A
Explanation:
The GDPR privacy principles that this scenario violates are Purpose Limitation and Data Minimization. Purpose Limitation requires that personal data be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. Data Minimization mandates that personal data collected should be adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed. In this case, collecting extensive personal information (e.g., IP address, location, gender) and potentially using it beyond the necessary scope for the app’s functionality could violate these principles by collecting more data than needed and possibly using it for purposes not originally intended.
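Data minimization, as described above, can be illustrated in code (field names here are a hypothetical example): the app whitelists only the fields strictly necessary for its declared purpose and discards everything else before processing.

```python
# Fields the app actually needs for its declared purpose (hypothetical).
NECESSARY_FIELDS = {"username", "language"}

def minimize(collected: dict) -> dict:
    """Keep only fields on the purpose-bound whitelist (data minimization)."""
    return {k: v for k, v in collected.items() if k in NECESSARY_FIELDS}

raw = {
    "username": "jdoe",
    "language": "en",
    "gender": "F",               # not necessary for the purpose
    "ip_address": "203.0.113.7", # not necessary for the purpose
    "location": "42.36,-71.06",  # not necessary for the purpose
}
print(minimize(raw))
```

Dropping the excess fields at collection time, rather than storing them "just in case," is what keeps the processing within both Purpose Limitation and Data Minimization.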
What is the primary reason the EU is considering updates to its Product Liability Directive?
- A . To increase the minimum warranty level for defective goods.
- B . To define new liability exemptions for defective products.
- C . To address digital services and connected products.
- D . To address free and open-source software.
C
Explanation:
The primary reason the EU is considering updates to its Product Liability Directive is to address digital services and connected products. The current directive does not adequately cover the complexities and challenges posed by modern digital and connected technologies. By updating the directive, the EU aims to ensure that it remains relevant and effective in addressing the liabilities associated with these advanced products, ensuring consumer protection and fair market practices in the digital age.
CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company’s product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time-consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. It has been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, provided it would achieve the company’s goals and comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team’s goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization’s operations in a responsible, cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. These concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.
Which other stakeholder groups should be involved in the selection and implementation of the AI hiring tool?
- A . Finance and Legal.
- B . Marketing and Compliance.
- C . Supply Chain and Marketing.
- D . Litigation and Product Development.
A
Explanation:
In the selection and implementation of the AI hiring tool, involving Finance and Legal is crucial. The Finance team is essential for assessing cost implications, budget considerations, and financial risks. The Legal team is necessary to ensure compliance with applicable laws and regulations, including those related to data privacy, employment, and anti-discrimination. Involving these stakeholders ensures a comprehensive evaluation of both the financial viability and legal compliance of the AI tool, mitigating potential risks and aligning with organizational objectives and regulatory requirements.