#R-00167496
Washington, DC or Indianapolis, IN area to support periodic on-site activities.
In this role, you will:
Design, build, and optimize scalable data solutions using Databricks and Medallion Architecture.
Manage ingestion routines for processing multi-terabyte datasets efficiently for multiple projects simultaneously, where each project may have multiple Databricks workspaces.
Integrate data from various structured and unstructured sources to enable high-quality business insights, applying data analysis techniques to derive insights from large datasets.
Implement effective data management strategies to ensure data integrity, availability, and accessibility. Identify opportunities for cost optimization in data storage, processing, and analytics operations.
Monitor and support user requests, addressing platform or performance issues, cluster stability, Spark optimization, and configuration management.
Collaborate with the team to enable advanced AI-driven analytics and data science workflows.
Integrate with various Azure services, including Azure Functions, Storage Services, Data Factory, Log Analytics, and User Management, for seamless data workflows (experience with these Azure services is a plus). Provision and manage infrastructure using Infrastructure-as-Code (IaC).
Apply best practices for data security, data governance, and compliance, ensuring support for federal regulations and public trust standards.
Proactively collaborate with technical and non-technical teams to gather requirements and translate business needs into data solutions.
For this position, you must possess:
BS degree in Computer Science or a related field with 3+ years of experience, or a Master's degree with 2+ years of experience
3+ years of experience developing and designing ingestion flows (structured, streaming, and unstructured data) using cloud platform services, with attention to data quality
Databricks Data Engineer certification and 2+ years of experience maintaining Databricks platform and development in Spark
Ability to work directly with clients and act as front-line support for incoming client requests. Clearly document and express solutions in the form of architecture and interface diagrams.
Proficiency in Python, Spark, and R is essential. .NET-based development is a plus.
Knowledge and experience with data governance, including metadata management, enterprise data catalog, design standards, data quality governance, and data security.
Experience with Agile process methodology, CI/CD automation, and cloud-based developments (Azure, AWS).
Not required, but additional education, certifications, and/or experience are a plus: certifications in Azure cloud; knowledge of FinOps principles and cost management
If you're looking for comfort, keep scrolling. At Leidos, we outthink, outbuild, and outpace the status quo - because the mission demands it. We're not hiring followers. We're recruiting the ones who disrupt, provoke, and refuse to fail. Step 10 is ancient history. We're already at step 30 - and moving faster than anyone else dares.
Original Posting:
September 25, 2025
For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days with an anticipated close date of no earlier than 3 days after the original posting date as listed above.
Pay Range:
$85,150.00 - $153,925.00
The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.
#Remote
#Featuredjob