Earl Dunn - Developer Resume
Overall 13 years of IT experience with strong emphasis on design, development, testing, and maintenance of data warehouse applications using IBM WebSphere MQ, WebSphere Message Broker, TIBCO RV, EMS, JMS, JSP, Servlets, and JDBC.
  • Skills: Hadoop, HDFS, MapReduce, Hive, Pig, Sqoop, Oozie, Flume, Kafka, Storm, Spark, HBase, NoSQL, MySQL, Oracle 11g, Teradata, AWS (EC2), Cloudera, Java, shell scripting, Maven, Ant, Splunk, capacity planning, data cleansing, data warehousing, reporting.
  • 2017-12-25


    Private Practice

    • Environment: Hadoop, HDFS, Hive, Pig, Sqoop, Oozie, Flume, Kafka, Cloudera, Scala, Python, shell scripting, UNIX, NoSQL, Spark.
    • Worked on Hadoop clusters with HDFS, MapReduce, Hive, Pig, Sqoop, Oozie, and Storm, building big data pipelines with UNIX shell scripting and Spark.
    • Implemented monitoring and alerting for Hadoop clusters using Apache Kafka, and exported MapReduce job output from HDFS to Teradata databases.
    • Used Spark Streaming to process data from HDFS and used ActiveMQ for real-time monitoring and reporting of the data.
    • Developed MapReduce jobs in Java and Scala to ingest data into HDFS, using Maven to build and manage the jobs.
    • Used Spark Streaming to perform data operations; debugged and monitored Hadoop jobs using Apache Kafka, HDFS, HBase, ZooKeeper, and Sqoop.
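The MapReduce jobs described above follow the usual map/shuffle/reduce pattern. A minimal pure-Python sketch of that pattern (a word count; all names here are hypothetical stand-ins for the Java/Scala jobs mentioned):

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs for each word, as a MapReduce mapper would.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the MapReduce shuffle/sort stage does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each key, as a reducer would.
    return {key: sum(values) for key, values in grouped.items()}

records = ["Hadoop HDFS Hive", "hadoop spark hive"]
counts = reduce_phase(shuffle(map_phase(records)))
```

In a real Hadoop job the shuffle is performed by the framework between the mapper and reducer tasks; this sketch only illustrates the data flow.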
  • 2017-12-25


    Labette Community College

    • Experience with Hadoop ecosystem components including HDFS, MapReduce, Hive, Sqoop, Flume, Oozie, and HBase, plus Redshift, Cloudera Manager, Teradata, SQL, NoSQL, EMR, EC2, S3, and data science tooling.
    • Used Scala and Spark for data analysis and visualization; designed and developed MapReduce jobs to extract data from different sources into HDFS.
    • Worked with NoSQL databases such as HBase and Cassandra, and with RDBMS sources ingested via Sqoop and Flume for social media and analytics data visualization.
    • Wrote MapReduce jobs to perform data analysis against AWS cloud and EDW databases; used Scala for data cleansing and processing.
    • Developed Hive queries to extract data from RDBMS sources into HDFS, validated the data in Hive, and created reports for the organization.
    • Created Python scripts to extract data from a Java API and load it into HDFS, and developed stored procedures to process data in the EDW.
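Loading data into HDFS from a Python script, as in the bullet above, is often done over the WebHDFS REST API. A minimal sketch using only the standard library (the host, port, and path below are hypothetical):

```python
from urllib.parse import urlencode

def webhdfs_url(host, port, path, op, **params):
    # Build a WebHDFS REST URL, e.g. op=CREATE to write a new file.
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# A CREATE request URL for writing extracted API data into HDFS.
url = webhdfs_url("namenode.example.com", 9870, "/data/extract.json",
                  "CREATE", overwrite="true")
```

Per the WebHDFS protocol, the client first issues this request to the NameNode, then PUTs the file bytes to the DataNode URL returned in the redirect.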
