David Fernandez - Lead Engineer
10+ years of experience in the IT industry, with expertise in the design, development, testing, and maintenance of data warehousing applications using IBM DataStage, Oracle, SQL Server, DB2, UNIX, Linux, Windows, and web technologies.
  • Skills: Hadoop, HDFS, Hive, Pig, HBase, MapReduce, Sqoop, Flume, Oozie, real-time data processing, data analysis, data pipelines, ETL, Oracle, Oracle 11g, TOAD, SQL, Java, relational database systems, UNIX, MS Visio, BI, visualization, reporting, scheduling, monitoring, testing, maintenance, documentation, architecture, best practices, product management, program development, web technology
  • 2017-12-25

    Lead Engineer

    Trenton Psychiatric Hospital

    • Wrote Hadoop streaming jobs to process data from sources such as HDFS, Sqoop, and Kafka; extracted data from Oracle, Teradata, and Sybase databases.
    • Developed and implemented data pipelines using Hadoop and Spark, including writing and testing Oracle real-time analytical applications.
    • Worked on Big Data technologies, data modeling, and web-service development; wrote code for the ETL process.
    • Handled data management and database administration using Oracle, TOAD, and shell scripts, coordinating with business users and technical teams.
    • Developed analytical and data pipelines using Oracle and Maven; used Sqoop to import and export data between Oracle and HDFS.
    • Provided leadership to the team on data modeling, data flow, database architecture, and change management using BMC Remedy.
  • 2017-12-25

    Senior Hadoop Developer

    Tower Loan

    • Analyzed and transformed data using Sqoop and Hive; ingested customer data into HDFS with Flume and analyzed it in Hive.
    • Scheduled Sqoop jobs to extract data from Oracle and load it into HDFS and Hive for further analysis and reporting.
    • Migrated data from Oracle to Teradata using Sqoop, with Hive and Impala for troubleshooting and visualization; used Tableau to extract, transform, and load data into the Oracle database.
    • Wrote Hive queries for data analysis and extraction using Sqoop, Oracle, and TOAD; involved in the development of the Hadoop cluster.
    • Involved in migrating the Hadoop cluster to an ecosystem including Hive, HBase, Sqoop, Flume, and YARN.
    • Migrated Informatica and Oracle databases from sources such as Teradata and MS Access.


 Kingsley Technologies