Data should be written as text to HDFS

Answer:
Solution:
Step 1: Create the directory:
mkdir /tmp/spooldir2
Step 2: Create a Flume configuration file with the configuration below for the source, sinks, and channels, and save it as flume8.conf:
agent1.sources = source1
agent1.sinks = sink1a sink1b
agent1.channels = channel1a channel1b
agent1.sources.source1.channels = channel1a...
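
The configuration is cut off above; a minimal sketch of how flume8.conf might continue, assuming a spooling-directory source fanned out to two HDFS sinks writing plain text (the source/sink/channel names match the truncated lines, but the HDFS paths and channel types below are illustrative assumptions):

# Spooling-directory source; events are replicated to both channels by default
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/spooldir2

# First HDFS sink, writing events as plain text (path is an assumption)
agent1.sinks.sink1a.channel = channel1a
agent1.sinks.sink1a.type = hdfs
agent1.sinks.sink1a.hdfs.path = /tmp/flume/primary
agent1.sinks.sink1a.hdfs.fileType = DataStream
agent1.sinks.sink1a.hdfs.writeFormat = Text

# Second HDFS sink, same text format, different directory (path is an assumption)
agent1.sinks.sink1b.channel = channel1b
agent1.sinks.sink1b.type = hdfs
agent1.sinks.sink1b.hdfs.path = /tmp/flume/secondary
agent1.sinks.sink1b.hdfs.fileType = DataStream
agent1.sinks.sink1b.hdfs.writeFormat = Text

# In-memory channels; file channels would survive an agent restart
agent1.channels.channel1a.type = memory
agent1.channels.channel1b.type = memory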


Data should be written as text to HDFS

Answer:
Solution:
Step 1: Create the directory:
mkdir /tmp/nrtcontent
Step 2: Create a Flume configuration file with the configuration below for the source, sink, and channel, and save it as flume6.conf:
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
agent1.sources.source1.channels = channel1...
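
Here too the configuration is truncated; a plausible single source/sink/channel continuation, assuming a spooling-directory source over /tmp/nrtcontent and an illustrative HDFS path:

agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/nrtcontent

agent1.sinks.sink1.channel = channel1
agent1.sinks.sink1.type = hdfs
# Output path is an assumption
agent1.sinks.sink1.hdfs.path = /tmp/flume
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text

agent1.channels.channel1.type = memory

The agent would then be started along these lines:

flume-ng agent --conf /etc/flume-ng/conf --conf-file flume6.conf --name agent1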


Now import only newly inserted records and append them to the existing directory, which was created in the first step.

Answer:
Solution:
Step 1: Clean the already imported data. (In the real exam, please make sure you don't delete data generated by a previous exercise.)
hadoop fs -rm -R departments
Step 2: Import data...
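
The import command itself is truncated; an incremental-append import in Sqoop uses --incremental append with a check column. A sketch, assuming the standard Cloudera retail_db connection details, department_id as the check column, and an illustrative --last-value:

sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table departments \
  --target-dir departments \
  --incremental append \
  --check-column department_id \
  --last-value 7   # assumed: highest id already imported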


Also make sure the result fields are terminated by '|' and lines are terminated by '\n'.

Answer:
Solutions:
Step 1: Clean the HDFS file system; if these directories exist, remove them:
hadoop fs -rm -R departments
hadoop fs -rm -R categories
hadoop fs -rm -R products
hadoop fs -rm -R orders
hadoop...
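
The cleanup list is cut off before the import; the delimiter requirement maps to Sqoop's --fields-terminated-by and --lines-terminated-by options. A sketch, assuming all retail_db tables are imported and an illustrative warehouse directory:

sqoop import-all-tables \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --fields-terminated-by '|' \
  --lines-terminated-by '\n' \
  --warehouse-dir /user/cloudera/retail_db   # assumed output location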


Please import data into a non-existing table; that is, while importing, create a Hive table named hadoopexam.departments_new.

Answer:
Solution:
Step 1: Go to the Hive interface and create the database:
hive
create database hadoopexam;
Step 2: Use the database created in the step above, and then create a table in it:
use hadoopexam;
show tables;
Step...
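
The remaining steps are truncated; creating the Hive table during the import is what Sqoop's --create-hive-table option is for. A sketch, assuming the retail_db departments table as the source:

sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table departments \
  --hive-import \
  --create-hive-table \
  --hive-table hadoopexam.departments_new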


Please import the departments table into a directory called departments_enclosedby; the file should be processable by a downstream system.

Answer:
Solution:
Step 1: Connect to the MySQL database:
mysql --user=retail_dba --password=cloudera
show databases;
use retail_db;
show tables;
Insert a record:
insert into departments values (9999, '"Data Science"');
select * from departments;
Step 2:...
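
Step 2 is cut off; making the file safe for a downstream system typically means enclosing and escaping fields so the embedded quotes in "Data Science" survive. A sketch using Sqoop's --enclosed-by and --escaped-by options, with illustrative delimiters and connection details:

sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table departments \
  --target-dir departments_enclosedby \
  --fields-terminated-by ',' \
  --escaped-by \\ \
  --enclosed-by '\"'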


Also make sure you use the orderid column for Sqoop to use for boundary conditions.

Answer:
Solutions:
Step 1: Clean the HDFS file system; if these directories exist, remove them:
hadoop fs -rm -R departments
hadoop fs -rm -R categories
hadoop fs -rm -R products
hadoop fs -rm -R orders
hadoop fs...
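
The import command is truncated; supplying a column for boundary conditions maps to Sqoop's --boundary-query option (with --split-by for the split column). A sketch, assuming the orders table and that the column is order_id as in the standard retail_db schema:

sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table orders \
  --target-dir orders \
  --split-by order_id \
  --boundary-query "SELECT min(order_id), max(order_id) FROM orders"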


Problem Scenario 25: You have been given the below comma-separated employee information, which needs to be added to the /home/cloudera/flumetest/in.txt file (to use a tail source):

sex, name, city
1, alok, mumbai
1, jatin, chennai
1, yogesh, kolkata
2, ragini, delhi
2, jyotsana, pune
1, valmiki, banglore

Create a flume conf file...
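
The conf file itself is truncated; a "tail source" in Flume is usually an exec source running tail -F on the file. A minimal sketch, assuming an HDFS sink and illustrative agent and path names:

agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

agent1.sources.source1.channels = channel1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /home/cloudera/flumetest/in.txt

agent1.sinks.sink1.channel = channel1
agent1.sinks.sink1.type = hdfs
# Output path is an assumption
agent1.sinks.sink1.hdfs.path = /tmp/flume25
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text

agent1.channels.channel1.type = memory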


Now export data from the Hive table departments_hive01 into departments_hive02. While exporting, please note the following: wherever there is an empty string, it should be loaded as a NULL value in MySQL, and wherever there is a -999 value in an int field, it should also be loaded as NULL.

Answer:
Solution:
Step 1: Create...
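
The solution is truncated at Step 1; the null-substitution requirements map to Sqoop's --input-null-string and --input-null-non-string export options. A sketch, assuming the default Hive warehouse path, Hive's default Ctrl-A field delimiter, and the usual retail_db connection details:

sqoop export \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table departments_hive02 \
  --export-dir /user/hive/warehouse/departments_hive01 \
  --input-fields-terminated-by '\001' \
  --input-null-string "" \
  --input-null-non-string -999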


Write a Hive query to read the average salary of all employees.

Answer:
Solution:
Step 1: Create the Hive table flumeemployee:
CREATE TABLE flumeemployee (
  name string,
  salary int,
  sex string,
  age int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Step 2: Create a Flume configuration file with the below configuration for...
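
The excerpt ends before the query itself; once the Flume-fed table exists, the average-salary query is a one-liner:

SELECT avg(salary) FROM flumeemployee;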
