What are the three key patrons involved in supporting the successful progress and formation of any AI-based application?
- A . Customer facing teams, executive team, and facilities team
- B . Marketing team, executive team, and data science team
- C . Customer facing teams, HR team, and data science team
- D . Customer facing teams, executive team, and data science team
D
Explanation:
Customer Facing Teams: These teams are critical in understanding and defining the requirements of the AI-based application from the end-user perspective. They gather insights on customer needs, pain points, and desired outcomes, which are essential for designing a user-centric AI solution.
Executive Team: Executives provide the strategic direction, sponsorship, and funding needed to take the application from concept to production.
Data Science Team: This team designs, builds, trains, and evaluates the models that power the application.
What is the primary purpose of inferencing in the lifecycle of a Large Language Model (LLM)?
- A . To customize the model for a specific task by feeding it task-specific content
- B . To feed the model a large volume of data from a wide variety of subjects
- C . To use the model in a production, research, or test environment
- D . To randomize all the statistical weights of the neural networks
C
Explanation:
Inferencing in the lifecycle of a Large Language Model (LLM) refers to using the model in practical applications. Here’s an in-depth explanation:
Inferencing: This is the phase where the trained model is deployed to make predictions or generate outputs based on new input data. It is essentially the model’s application stage.
Production Use: In production, inferencing involves using the model in live applications, such as chatbots or recommendation systems, where it interacts with real users.
Research and Testing: During research and testing, inferencing is used to evaluate the model’s performance, validate its accuracy, and identify areas for improvement.
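The distinction between training and inferencing can be sketched with a toy bigram "model" (a deliberately simplified stand-in for an LLM, not a real one): training learns the parameters from data, while inferencing applies the already-learned parameters to new input without updating them.

```python
# Toy illustration of training vs. inferencing (not a real LLM).
from collections import defaultdict

def train(corpus):
    """'Training': learn bigram counts (the model's parameters) from data."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def infer(model, word):
    """'Inferencing': use the frozen parameters to predict the next word."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train("the cat sat on the mat the cat ran")
print(infer(model, "the"))  # most frequent follower of "the"
```

In production, research, or test environments the `infer` step is what runs: the parameters stay fixed, and only new inputs flow through the model.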
References:
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444.
Chollet, F. (2017). Deep Learning with Python. Manning Publications.
What is the primary function of Large Language Models (LLMs) in the context of Natural Language Processing?
- A . LLMs receive input in human language and produce output in human language.
- B . LLMs are used to shrink the size of the neural network.
- C . LLMs are used to increase the size of the neural network.
- D . LLMs are used to parse image, audio, and video data.
A
Explanation:
The primary function of Large Language Models (LLMs) in Natural Language Processing (NLP) is to process and generate human language.
Here’s a detailed explanation:
Function of LLMs: LLMs are designed to understand, interpret, and generate human language text.
They can perform tasks such as translation, summarization, and conversation.
Input and Output: LLMs take input in the form of text and produce output in text, making them versatile tools for a wide range of language-based applications.
Applications: These models are used in chatbots, virtual assistants, translation services, and more, demonstrating their ability to handle natural language efficiently.
References:
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems.
What is one of the positive stereotypes people have about AI?
- A . AI is unbiased.
- B . AI is suitable only in manufacturing sectors.
- C . AI can leave humans behind.
- D . AI can help businesses complete tasks around the clock 24/7.
D
Explanation:
24/7 Availability: AI systems can operate continuously without the need for breaks, which enhances productivity and efficiency. This is particularly beneficial for customer service, where AI chatbots can handle inquiries at any time.
What are the enablers that contribute towards the growth of artificial intelligence and its related technologies?
- A . The introduction of 5G networks and the expansion of internet service provider coverage
- B . The development of blockchain technology and quantum computing
- C . The abundance of data, lower cost high-performance compute, and improved algorithms
- D . The creation of the Internet and the widespread use of cloud computing
C
Explanation:
Several key enablers have contributed to the rapid growth of artificial intelligence (AI) and its related technologies.
Here’s a comprehensive breakdown:
Abundance of Data: The exponential increase in data from various sources (social media, IoT devices, etc.) provides the raw material needed for training complex AI models.
High-Performance Compute: Advances in hardware, such as GPUs and TPUs, have significantly lowered the cost and increased the availability of high-performance computing power required to train large AI models.
Improved Algorithms: Continuous innovations in algorithms and techniques (e.g., deep learning, reinforcement learning) have enhanced the capabilities and efficiency of AI systems.
References:
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444.
Dean, J. (2020). AI and Compute. Google Research Blog.
What are the three broad steps in the lifecycle of AI for Large Language Models?
- A . Training, Customization, and Inferencing
- B . Preprocessing, Training, and Postprocessing
- C . Initialization, Training, and Deployment
- D . Data Collection, Model Building, and Evaluation
A
Explanation:
Training: The initial phase where the model learns from a large dataset. This involves feeding the model vast amounts of text data and using techniques like supervised or unsupervised learning to adjust the model’s parameters.
Customization: The trained base model is then adapted to a specific task or domain, for example through fine-tuning, transfer learning, or prompt engineering.
Inferencing: Finally, the model is deployed and used to generate outputs on new inputs in production, research, or test environments.
What is artificial intelligence?
- A . The study of computer science
- B . The study and design of intelligent agents
- C . The study of data analysis
- D . The study of human brain functions
B
Explanation:
Artificial intelligence (AI) is a broad field of computer science focused on creating systems capable of performing tasks that would normally require human intelligence. The correct answer is option B, which defines AI as "the study and design of intelligent agents."
Here’s a comprehensive breakdown:
Definition of AI: AI involves the creation of algorithms and systems that can perceive their environment, reason about it, and take actions to achieve specific goals.
Intelligent Agents: An intelligent agent is an entity that perceives its environment and takes actions to maximize its chances of success. This concept is central to AI and encompasses a wide range of systems, from simple rule-based programs to complex neural networks.
Applications: AI is applied in various domains, including natural language processing, computer vision, robotics, and more.
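The "intelligent agent" view above can be sketched as a mapping from percepts to actions. The following is a minimal illustration using a hypothetical thermostat environment (the environment and action names are invented for the example):

```python
# Toy sketch of an intelligent agent: perceive the environment, then act.
def agent(percept):
    """A simple reflex agent for a hypothetical thermostat environment."""
    temperature = percept["temperature"]
    if temperature < 18:
        return "heat_on"   # too cold: act to raise temperature
    if temperature > 24:
        return "heat_off"  # too warm: act to lower temperature
    return "idle"          # goal state: no action needed

print(agent({"temperature": 15}))  # heat_on
```

Real AI agents replace this hand-written rule with learned decision-making, but the perceive-then-act loop is the same.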
References:
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson.
Poole, D., Mackworth, A., & Goebel, R. (1998). Computational Intelligence: A Logical Approach. Oxford University Press.
What are the potential impacts of AI in business? (Select two)
- A . Limiting the use of data analytics
- B . Increasing the need for human intervention
- C . Reducing production and operating costs
- D . Improving operational efficiency and enhancing customer experiences
C, D
Explanation:
Reducing Costs: AI can automate repetitive and time-consuming tasks, leading to significant cost savings in production and operations. By optimizing resource allocation and minimizing errors, businesses can lower their operating expenses.
Improving Efficiency and Customer Experience: AI also streamlines workflows and enables personalized, always-available customer interactions, improving both operational efficiency and customer satisfaction.
What is Transfer Learning in the context of Large Language Model (LLM) customization?
- A . It is where you can adjust prompts to shape the model’s output without modifying its underlying weights.
- B . It is a process where the model is additionally trained on something like human feedback.
- C . It is a type of model training that occurs when you take a base LLM that has been trained and then train it on a different task while using all its existing base weights.
- D . It is where purposefully malicious inputs are provided to the model to make the model more resistant to adversarial attacks.
C
Explanation:
Transfer learning is a technique in AI where a pre-trained model is adapted for a different but related task.
Here’s a detailed explanation:
Transfer Learning: This involves taking a base model that has been pre-trained on a large dataset and fine-tuning it on a smaller, task-specific dataset.
Base Weights: The existing base weights from the pre-trained model are reused and adjusted slightly to fit the new task, which makes the process more efficient than training a model from scratch.
Benefits: This approach leverages the knowledge the model has already acquired, reducing the amount of data and computational resources needed for training on the new task.
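The idea can be sketched numerically in a few lines of plain Python (a hypothetical two-part "model", not a real LLM): the pretrained base weights are reused as-is, and only a small task head is adjusted for the new task.

```python
# Toy sketch of transfer learning: reuse frozen base weights, adapt a head.
def base_features(x, base_w):
    """Pretrained 'base model': a fixed transformation learned earlier."""
    return [x * w for w in base_w]

def head(features, head_w):
    """New task head: the only part trained for the new task."""
    return sum(f * w for f, w in zip(features, head_w))

base_w = [0.5, -1.0, 2.0]   # pretrained weights, reused without modification
head_w = [0.0, 0.0, 0.0]    # new head, trained from scratch

# One gradient step toward a target on the new task (base_w stays frozen).
x, target, lr = 1.0, 3.0, 0.1
feats = base_features(x, base_w)
error = head(feats, head_w) - target
head_w = [w - lr * error * f for w, f in zip(head_w, feats)]
```

Only `head_w` changes during the update; `base_w` carries over the knowledge from the original task, which is why far less data and compute are needed than training from scratch.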
References:
Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C., & Liu, C. (2018). A Survey on Deep Transfer Learning. In International Conference on Artificial Neural Networks.
Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
What is Artificial Narrow Intelligence (ANI)?
- A . AI systems that can perform any task autonomously
- B . AI systems that can process beyond human capabilities
- C . AI systems that can think and make decisions like humans
- D . AI systems that can perform a specific task autonomously
D
Explanation:
Artificial Narrow Intelligence (ANI) refers to AI systems that are designed to perform a specific task or a narrow set of tasks. The correct answer is option D.
Here’s a detailed explanation:
Definition of ANI: ANI, also known as weak AI, is specialized in one area. It can perform a particular function very well, such as facial recognition, language translation, or playing a game like chess.
Characteristics: Unlike general AI, ANI does not possess general cognitive abilities. It cannot perform tasks outside its specific domain without human intervention or retraining.
Examples: Siri, Alexa, and Google’s search algorithms are examples of ANI. These systems excel in their designated tasks but cannot transfer their learning to unrelated areas.
References:
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.