Databricks Architect (Healthcare Payer, Databricks Big Data Exp.)
- Ventures Unlimited
- Canton, Massachusetts
- Full Time
Must Have Technical/Functional Skills
Healthcare Payer Experience | E3
Strong experience in Databricks (primary language: Scala)
Strong Big Data Skills | E3
Strong experience building data pipelines using Azure Data Factory and Apache Spark (Azure Databricks)
Ability to analyze large datasets
Strong analytical skills
Customer Focus | E3
Roles & Responsibilities
Design and implement highly performant data ingestion pipelines from multiple sources using Apache Spark and Azure Databricks
Deliver and present proofs of concept of key technology components to project stakeholders
Develop scalable and reusable frameworks for ingesting geospatial data sets
Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
Work with event-based/streaming technologies to ingest and process data
Work with other members of the project team to support delivery of additional project components (API interfaces, Search)
Evaluate the performance and applicability of multiple tools against customer requirements
Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
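For candidates gauging fit, the ingestion responsibilities above might look like the following in Scala, the posting's stated primary language. This is a minimal sketch, not part of the job description: the storage path, table name, and column names are hypothetical, and it assumes a Databricks/Spark runtime where a SparkSession can be created.

```scala
// Minimal sketch of one batch ingestion step on Azure Databricks.
// All paths, table names, and columns below are hypothetical examples.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object IngestClaims {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-ingestion")
      .getOrCreate()

    // Read raw payer claims landed by Azure Data Factory (hypothetical path).
    val raw: DataFrame = spark.read
      .option("header", "true")
      .csv("abfss://landing@storageaccount.dfs.core.windows.net/claims/")

    // Basic quality gate: require a claim id, deduplicate, stamp ingest time.
    val cleaned = raw
      .filter(col("claim_id").isNotNull)
      .dropDuplicates("claim_id")
      .withColumn("ingested_at", current_timestamp())

    // Append to a Delta table acting as the target data repository.
    cleaned.write
      .format("delta")
      .mode("append")
      .saveAsTable("curated.claims")

    spark.stop()
  }
}
```

The same read/clean/write shape carries over to the streaming responsibility by swapping `spark.read` for `spark.readStream` against an event source.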
Generic Managerial Skills, If any
Work in Onsite-Offshore Model
Team Player
Employers have access to artificial intelligence language tools (AI) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
Dice Id:
10338945
Position Id:
DBATCS1H
Job ID: 479721704
Originally Posted on: 6/4/2025