RichardsonRecruiter Since 2001
the smart solution for Richardson jobs

Data Engineer II - Core Technology Infrastructure

Company: Bank of America
Location: Richardson
Posted on: January 12, 2022

Job Description:
Infrastructure Information Services is looking for top talent to design and build best-in-class Data Management and Integration Services capability over Infrastructure/ITSM data using Hadoop Architecture.
The Data Engineer will innovate and transform the systems integration landscape for the Technology Infrastructure organization while following industry best practices and maturing capabilities in support of Enterprise Data Management standards. The ideal candidate is an expert in Data Warehousing and Master Data Management design and development, with a strong understanding of data management concepts and applied DW/MDM development of database-level routines and objects. The candidate should have experience migrating a traditional Relational Database Management System (RDBMS) to a Hadoop-based architecture, along with hands-on development experience in many of the Apache Hadoop-based tools. The role involves hands-on development and support of integrations with multiple systems, ensuring accuracy and quality of data by implementing business and technical reconciliations. The candidate needs to be able to understand macro-level requirements and convert them into actionable tasks to deliver a technically sound product, and should be able to work collaboratively in teams.
Major Duties:
Analyze the current RDBMS Master Data Management platform, including orchestrations, workflows, and transformations, and help design a scalable Hadoop-based platform for structured and semi-structured big data
Reengineer traditional database systems and stored procedures using Big Data services
Accomplished development experience using Spark and Spark SQL
Expert-level skills in evaluating, developing, and performance-tuning existing Hive and PySpark implementations
Ability to manage multiple priorities
Required Job Skills:
10+ years of total IT experience
At least 5 years of experience developing for Data Warehousing, Data Marts, and/or Master Data Management
Deep experience with Hadoop, including Python, Spark, Hive, HBase, and HDFS, with an emphasis on performance tuning and architecture (e.g., partitioning, bucketing, and file formats such as Parquet and flat files)
Programming experience with Python, PySpark, and Spark SQL
Exposure to Relational Database Management Systems such as Oracle, DB2, or SQL Server
Deep knowledge of the Hadoop ecosystem, including but not limited to HDFS, MapReduce, Spark, Sqoop, Oozie, Kafka, and Hive
Understanding of object-oriented programming concepts
Expert SQL skills
Experience in SDLC and best practices for development
Ability to work against mid-level design documentation, take it to a low-level design, and deliver a solution that meets the success criteria
Knowledge of packaging and promotion practices for maintaining code in development, test, and production
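Bucketing, mentioned in the Hadoop skills above, assigns rows to a fixed number of buckets by hashing a key column so that equal keys always co-locate. A toy sketch of the idea in plain Python (illustrative only; Hive and Spark use their own hash functions and file layouts):

```python
# Toy sketch of the hash-bucketing idea behind Hive/Spark bucketed tables.
# Not the actual hash function either engine uses.

def bucket_for(key: str, num_buckets: int) -> int:
    """Map a record key to one of num_buckets buckets, deterministically."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF  # simple stable string hash
    return h % num_buckets

def bucketize(keys, num_buckets):
    """Group keys into buckets; equal keys always land together."""
    buckets = {b: [] for b in range(num_buckets)}
    for k in keys:
        buckets[bucket_for(k, num_buckets)].append(k)
    return buckets

# Identical keys ("host-a") end up in the same bucket, which is what
# lets two tables bucketed the same way join without a full shuffle.
layout = bucketize(["host-a", "host-b", "host-c", "host-a"], num_buckets=4)
```

In Spark the equivalent is typically `df.write.bucketBy(n, "col")` on a managed table; the sketch only shows why same-key bucketing makes joins cheaper.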
Desired Job Skills:
Experience with Jira & Bitbucket

Job Band:
H5

Shift:
1st shift (United States of America)

Hours Per Week:
40

Weekly Schedule:

Referral Bonus Amount:
0

Keywords: Bank of America, Richardson, Data Engineer II - Core Technology Infrastructure, IT / Software / Systems, Richardson, Texas
