Sr Data Engineer, ISC

Charlotte, NC 28202

Join a team recognized for leadership, innovation and diversity

The future is what you make it.

When you join Honeywell, you become a member of our global team of thinkers, innovators, dreamers and doers who make the things that make the future. That means changing the way we fly, fueling jets in an eco-friendly way, keeping buildings smart and safe and even making it possible to breathe on Mars.

Working at Honeywell isn't just about developing cool things. That's why all our employees enjoy access to dynamic career opportunities across different fields and industries.

Are you ready to help us make the future?

Join a company that is transforming from a traditional industrial company to a contemporary digital industrial business, harnessing the power of cloud, big data, analytics, Internet of Things, and design thinking.

You will lead change that brings value to our customers, partners, and shareholders through the creation of innovative software and data-driven products and services. You will work with customers to identify their high value business questions and work through their data to search for answers. You will be responsible for working within Honeywell to identify opportunities for new growth and efficiency based on data analysis.


As a Sr. Data Engineer – ISC, you will be part of a team that delivers contemporary analytics solutions for the Global Integrated Supply Chain function at Honeywell. You will build strong relationships with leadership to effectively deliver contemporary data analytics solutions and contribute directly to business success. You will develop solutions on various database systems such as Hive, Hadoop, and PostgreSQL.

You will identify and implement process improvements – and because you don't like doing the same thing twice, you will automate whatever you can. You always keep an eye on scalability, optimization, and process. You have prior experience with Big Data, IoT data, SQL, Azure, and AWS.

You will work on a team including scrum masters, product owners, data architects, data engineers, data scientists and DevOps. You and your team collaborate to build products from the idea phase through launch and beyond. The software you write makes it to production in sprints. Your team will be working on creating a new platform using your experience of APIs, microservices, and platform development.

  • Bachelor's degree in Computer Science, Engineering, Applied Mathematics
  • 6 years of data engineering experience
  • 2 years in a supply chain function – production, materials planning, logistics/distribution, or procurement

    • Should have developed and deployed complex big data ingestion jobs in Talend/Informatica BDM, bringing prototypes to production on Hadoop/NoSQL/MPP platforms.
    • Should have a minimum of 4 years of hands-on experience with MapReduce, Pig/Hive, Spark, etc., and automation of data flows using NiFi and Airflow/Oozie.
    • Minimum 3 years of experience in developing and building applications to process very large amounts of data (structured and unstructured), including streaming real-time data (Spark, R/Python, Scala, Kafka, Spark streaming or other such tools).
    • Minimum 2 years of experience working with at least one NoSQL system (HBase, Cassandra, MongoDB, etc.). In-depth knowledge of schema design to effectively address requirements.
    • Experience in writing complex SQL statements
    • Experience with cloud-based deployments. Understanding of containers and container orchestration (Swarm or Kubernetes).
    • Hands-on experience with Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure Data Lake Storage).
    • Good understanding of branching, build, deployment, and CI/CD practices, and of tools such as Octopus and Bamboo
    • Experience working with Agile methodologies and Scrum; knowledge of software best practices such as Test-Driven Development (TDD)
    • Effective communication skills and succinct articulation
    • Experience in building advanced analytics solutions with data from enterprise systems like ERPs, CRMs, Marketing tools etc.
    • Experience with dimensional modeling, data warehousing and data mining
    • Experience with machine learning solutions and with promoting data science methods
    • Database performance management and API development
    • Technology upgrade oversight
    • Experience with visualization software (Tableau, Spotfire, QlikView, Angular.js, D3.js)
    • Understanding of best-in-class model and data configuration and development processes
    • Experience working with remote and global teams and cross team collaboration
    • Consistently makes timely decisions even in the face of complexity, balancing systematic analysis with decisiveness


      Additional Information
      • JOB ID: HRD82324
      • Category: Finance
      • Location: 300 S. Tryon St, Suite 500 / 600,Charlotte,North Carolina,28202,United States