• Minimum 1-year development experience in a big data project.
• Should have executed at least one end-to-end project using Hadoop, Hive, and HDFS.
• Deep understanding of MapReduce implementation patterns.
• ETL experience in a Data warehousing project.
• Loading from disparate data sets.
• Pre-processing using Hive / Pig / Python / Unix scripting.
• Good skills and experience in Linux, Java, XML, JSON, REST.
• Understanding of database internals and data warehouse implementation techniques; working knowledge of SQL.
• Solid understanding of data structures and common algorithms, including their time complexity.
• Should have worked in an agile development environment.
• Team player; works proactively with other team members to accomplish key development tasks.
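As a concrete illustration of the MapReduce implementation patterns mentioned above, here is a minimal word-count sketch in plain Python (sample data and function names are hypothetical; a real job would run as mapper and reducer tasks on a Hadoop cluster):

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs for every word, as a Hadoop mapper would
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, mimicking the framework's shuffle/sort step
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the grouped counts for each word, as a reducer would
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big plans", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"big": 2, "data": 2, "plans": 1, "pipelines": 1}
```

The same map/shuffle/reduce structure applies whether the job is written against the Hadoop Java API or expressed in Hive or Pig, which compile down to MapReduce-style stages.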
Good to have:
• Implementation experience with any of the following: Pig, Sqoop, Flume, Storm, Spark, ZooKeeper, HBase, Chukwa, or Scala.
• Design and develop ETL data flows using Hive / Pig / Unix scripts.
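A minimal sketch of the kind of ETL data flow described above, loading from disparate data sets, in plain Python with hypothetical sample data (a production flow would read from HDFS and load into a Hive warehouse table):

```python
import json

# Hypothetical disparate sources: one CSV-like feed, one JSON-lines feed
csv_rows = ["1,alice,120", "2,bob,80"]
json_rows = ['{"id": 3, "name": "carol", "spend": 200}']

def extract():
    # Normalise both source formats into a common record shape
    for row in csv_rows:
        user_id, name, spend = row.split(",")
        yield {"id": int(user_id), "name": name, "spend": int(spend)}
    for row in json_rows:
        yield json.loads(row)

def transform(records):
    # Keep only high-spend users and tidy names, as a Hive query might
    for rec in records:
        if rec["spend"] >= 100:
            yield {"id": rec["id"], "name": rec["name"].title(),
                   "spend": rec["spend"]}

def load(records):
    # Stand-in for writing to a warehouse table
    return sorted(records, key=lambda r: r["id"])

warehouse = load(transform(extract()))
# warehouse == [{"id": 1, "name": "Alice", "spend": 120},
#               {"id": 3, "name": "Carol", "spend": 200}]
```

In a Hadoop stack the extract step would typically be Sqoop or Flume ingestion, the transform a Hive or Pig script, and the load an insert into a partitioned Hive table.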