Arcadia Healthcare Solutions

Software Engineer in Data Pipeline Integration

US-MA-Burlington

Overview

In this position you will work with a talented engineering group to design, build, and test data pipelines that ingest and analyze over a billion records every night. Leveraging technologies like Informatica, Apache NiFi, SQL Server, and the ELK stack (Elasticsearch/Logstash/Kibana), you will implement data integration projects on cloud platforms such as AWS. The ideal candidate is passionate about using technology to solve complex problems in a distributed, heterogeneous environment.

 

Top Reasons to Work with Us

  • Opportunity to work for an awesome, growing software company
  • Opportunity to work with a highly scalable cloud platform
  • Opportunity to develop a highly disruptive platform that is going to change healthcare analytics

What’s In It for You

  • Opportunity to be part of a team creating a platform that is going to drastically improve healthcare analytics
  • Awesome work environment (teleworking opportunities also considered)
  • Competitive compensation
  • Great benefits like flexible time off
  • Stocked kitchen with snacks, beverages, and more
  • Participate in Innovation Days, Datathons and Lunch & Learns

General Principles

  • Motivated by bleeding-edge technologies and methodologies
  • Track record of developing and delivering work into production
  • Strong fundamentals in database queries and functional programming best practices
  • Versatile (full stack and full cycle)
  • Proven ability to adopt new technologies
  • Craftsmanship
  • Committed, disciplined, self-motivated, and self-organizing
  • Finds a way to move forward
  • Collaborates throughout the software development life cycle
  • Contributes to the continuous improvement of our software development processes

Responsibilities

What You Will Be Doing

  • Work with teams and clients to extract healthcare data
  • Identify common data transformation patterns and implement reusable, scalable solutions
  • Execute data discovery and identify value within new datasets, leading to new extraction methods and/or tools
  • Work with massive datasets, using technology to transform them into valuable assets
  • Create queries in SQL, Spark SQL, or other languages to cleanse and transform incoming data into standard formats
  • Design and implement software components
  • Perform code reviews
  • Write unit tests
  • Write integration tests
  • Deploy software components
  • Groom features (epic definition, story estimates, task breakdown)
  • Manage code repositories
  • Establish and enforce software versioning
  • Establish and maintain efficient local development environments
  • Provide, analyze, and respond to software development metrics such as feature lifecycle and burn-down
  • Provide feedback and recommendations to improve software development processes

Qualifications

Required:

  • 3-5 years of related work experience
  • In-depth experience with databases such as MySQL, PostgreSQL, Microsoft SQL Server, or Oracle
  • Experience with Java, Python, and/or Scala
  • Experience working with complex data sets
  • Experience with business intelligence software or advanced reporting queries/frameworks

Preferred:

  • Apache NiFi, Talend, IBM InfoSphere, TIBCO, Pentaho, or Informatica
  • ELK (Elasticsearch/Logstash/Kibana)
  • Distributed Hadoop-like technologies such as Spark, Storm, and/or Kafka
  • Tableau, QlikView, Apache Zeppelin, IPython, or Jupyter
  • Git, Jenkins, Travis CI
  • Healthcare data experience
  • HL7, CCD, and CCLF file formats/designs

 
