
Big Data Architect

Chicago, IL

At Abbott, we're committed to helping people live their best possible life through the power of health. For more than 125 years, we've brought new products and technologies to the world -- in nutrition, diagnostics, medical devices and branded generic pharmaceuticals -- that create more possibilities for more people at all stages of life. Today, 99,000 of us are working to help people live not just longer, but better, in the more than 150 countries we serve.

Abbott has an immediate need for a Big Data Architect to join our team in Chicago, Illinois, working with the Big Data and Advanced Analytics team in the Commercial, Digital and Innovation (CDI) group of Business Technology Service (BTS).
• The Big Data Architect is a player-coach who will not only architect solutions but will also be hands-on in guiding the team and building the solution
• Servant Leadership – Able to garner respect from product team(s) and willing to get hands dirty to get the job done
• Knowledgeable – Deep understanding of and experience with the big data ecosystem, open source projects, the data value chain, and architecture patterns (experience with everything listed is NOT required)
• Communication – Strong verbal and written communication skills
• Facilitation – Able to lead architecture work sessions and articulate architecture components to data engineers, DevOps, architects and senior management
• Assertion – Able to ensure architecture concepts and principles are adhered to, must be able to be a voice of reason and authority, and make the tough calls
• Continual Improvement – Continually grow in your craft, learning new tools and techniques to ensure the architecture stays relevant, reliable and scalable
• Conflict Resolution – Able to facilitate tough discussions and propose alternatives or different approaches
• Transparent – Bring disclosure and visibility to the business about development progress and grow business trust
• Influencer – Embrace the role that will influence and impact transformation of online distribution of premium digital assets for Abbott


  • Architect and implement roadmaps and bring to life revolutionary new analytics and insights.
  • Provide guidance and platform selection advice for common Big Data (Distributed) platforms
  • Design data flow and processing pipelines for ingestion and analysis using modern toolsets such as Spark on Scala, Kafka, Flume, Sqoop, and others.
  • Develop and recommend novel and innovative -- yet proven and demonstrated -- approaches to solving business and technical problems using analytics solutions
  • Design data structures for ingestion and reporting, specific to use case and technology.
  • Provide data management expertise in evaluating requirements and developing data architecture and refining platform components and design. Data management includes appropriate structuring, stewardship of data, semantics/syntax of data attributes, coding structures, and mapping schemes. Design and develop code, scripts and data pipelines that leverage structured and unstructured data.
  • Guide and coach junior data engineers, DevOps engineers, share best practices, and perform code reviews
  • Collaborate with cross-functional teams to utilize the new big data tools
  • Manage architecture for data interchange using microservices, batch workloads and APIs

    Part of the centralized data office responsible for building and managing advanced analytics products/solutions for Abbott


    • BA/BS degree or equivalent experience; Computer Science or Math background preferred
    • 12+ years of relevant technology architecture consulting or industry experience, including information delivery, analytics and business intelligence based on data from a hybrid of Hadoop Distributed File System (HDFS), non-relational (NoSQL, Redshift) and relational data warehouses.
    • 3 or more years of hands on experience with data lake implementations, core modernization and data ingestion.
    • 3 or more years of hands-on working experience with big data technologies: Hadoop, Scala, Python, and big data frameworks such as Kafka, Hive, HDFS, MapReduce, YARN, Pig, Oozie, HBase, and Spark, plus AWS services such as S3 and EMR.
    • Familiarity with commercial distributions of HDFS (Hortonworks, Cloudera, or MapR)
    • 2 or more years of hands on experience designing and implementing data ingestion techniques for real time and batch processes for video, voice, weblog, sensor, machine and social media data into Hadoop ecosystems and HDFS clusters.
    • Experience in architecting and engineering innovative data analysis solutions
    • Experience with architectural patterns for data-intensive solutions
    • Understanding of data science terminology and familiarity with machine learning concepts
    • Customer-facing skills to represent the Big Data organization well within the Abbott environment and drive discussions with senior personnel regarding trade-offs, best practices, project management and risk mitigation
    • Demonstrated ability to think strategically about business, product, and technical challenges in an enterprise environment.
    • Current hands-on implementation experience required; only individual contributors need apply.
    • Strong verbal and written communications skills and ability to lead effectively across organizations.
    • The ability to provide strategic and architectural direction to address unique business problems.
    • System design and modeling skills (e.g. domain-driven design, data modeling, API design)
    • Experience with other NoSQL platforms, focused on what requirements drive technology choices.
    • Strong knowledge of standard methodologies, concepts, best practices, and procedures within a Big Data environment
    • A certain degree of creativity and latitude is required as well as flexibility in job role and working hours during critical deliveries
    • A natural sense of urgency, initiative and a positive team-player philosophy, reflected in daily work ethic.
    • Proficient understanding of distributed computing principles with the ability to architect and explain complex systems interactions including data flows, common interfaces, APIs and methods available.
    • Ability to work well with a cross-functional, geographically dispersed team and customer base.
    • Experience with SaaS/Cloud based offerings/products
    • Must have designed and built a scalable big data infrastructure that has been in use for several years
    • Experience designing architectures that have to work in a highly-regulated industry (healthcare or finance preferred).
    • Experience designing architectures that can incorporate data from multiple data sources
    • Experience educating other team members on a technology stack
    • Positive attitude, quick learner with strong desire to make big impact with innovative technical solutions
      Job Family: IT Services & Solutions Delivery
      Division: ADD Diagnostics
      Travel: Yes, 10 % of the Time
      Medical Surveillance: No
      Significant Work Activities: Continuous sitting for prolonged periods (more than 2 consecutive hours in an 8 hour day)

