Living Our Values

All associates are guided by Our Values. Our Values are the unifying foundation of our companies. We strive to ensure that every decision we make and every action we take demonstrates Our Values. We believe that putting Our Values into practice creates lasting benefits for all of our associates, shareholders, and the communities in which we live.

Why Join Us

- Career Growth: Advance your career with opportunities for leadership and personal development.
- Culture of Excellence: Be part of a supportive team that values your input and encourages innovation.
- Competitive Benefits: Enjoy a comprehensive benefits package that looks after both your professional and personal needs.

Total Rewards

Our Total Rewards package underscores our commitment to recognizing your contributions. We offer a competitive and fair compensation structure that includes base pay and performance-based rewards. Compensation is based on skill set, experience, qualifications, and job-related requirements. Our comprehensive benefits package includes medical, dental, and vision insurance, wellness programs, retirement plans, and generous paid leave. Discover more about what we offer by visiting our Benefits page.

A Day In The Life

The Principal Data Engineer within the Data Science and Analytics team plays a crucial role in architecting, implementing, and managing robust, scalable data platforms. This position demands a blend of cloud data engineering, systems engineering, data integration, and machine learning systems knowledge to enhance GST's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs. You will guide other team members and collaborate closely with cross-functional teams to design and implement modern data solutions that enable data-driven decision-making across the organization.

As a Principal Data Engineer you will:

- Collaborate with Business and IT functional experts to gather requirements or issues, perform gap analysis, and recommend/implement process and/or technology improvements to optimize data solutions.
- Design data solutions on Databricks, including Delta Lake, Data Warehouse, Data Mart, and others, to support the data science and analytical needs of the organization.
- Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Databricks, Apache Spark, Kafka, Flink, AWS Glue, or other AWS services.
- Work within cloud environments like AWS to leverage services including but not limited to EC2, RDS, S3, Athena, Glue, Lambda, EMR, Kinesis, and SQS for efficient data handling and processing.
- Develop and optimize data models and storage solutions (SQL, NoSQL, key-value DBs, data lakes) to support operational and analytical applications, ensuring data quality and accessibility.
- Utilize ETL tools and frameworks (e.g., Apache Airflow, Talend) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics.
- Implement pipelines with a high degree of automation for data workflows and deployment pipelines using tools like Apache Airflow, Terraform, and CI/CD frameworks.
- Collaborate closely with business analysts, data scientists, machine learning engineers, and optimization engineers, providing the data infrastructure and tools needed for complex analytical models and leveraging Python, Scala, or R for data processing scripts.
- Ensure compliance with data governance, compliance, and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment.
- Establish best practices for code documentation, testing, and version control, ensuring consistent and reproducible data engineering practices across the team.
- Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
- Ensure efficient usage of AWS and Databricks resources to minimize costs while maintaining high performance and scalability.
- Work cross-functionally to understand the data landscape, develop proofs of concept, and demonstrate them to stakeholders.
- Lead one or more data projects with support from internal and external resources. Coach and mentor junior data engineers.
- Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.

What We Need From You

- Bachelor's Degree in Computer Science, Data Science, MIS, Engineering, Mathematics, Statistics, or another quantitative discipline, with 5-8 years of hands-on experience in data engineering and a proven track record in designing and operating large-scale data pipelines and architectures. Required
- Proven experience designing scalable, fault-tolerant data architectures and pipelines on Databricks (Delta Lake, Lakehouse, Unity Catalog, streaming) and AWS, including ETL/ELT development and data modeling, with a focus on performance optimization and maintainability. Required
- Deep experience with platforms and services such as Databricks and AWS native data offerings. Required
- Solid experience with big data technologies (Databricks, Apache Spark, Kafka) and AWS cloud services related to data processing and storage. Required
- Strong hands-on experience with ETL/ELT pipeline development using AWS tools and Databricks Workflows. Required
- Strong experience with AWS cloud services, including hands-on experience integrating cloud storage and compute services with Databricks. Required
- Proficiency in SQL and programming languages relevant to data engineering (Python, Java, Scala). Required
- Hands-on RDBMS and data warehousing experience (data modeling, analysis, programming, stored procedures). Required
- Good understanding of system architecture and design patterns, and the ability to design and develop applications using these principles. Required
- Proficiency with version control systems like Git and experience with CI/CD pipelines for automating data engineering deployments. Required
- Familiarity with machine learning model deployment and management practices. Preferred
- Experience with SAP, BW, HANA, Tableau, or Power BI. Preferred
- Experience in the auto, manufacturing, or supply chain industries. Preferred
- Project life-cycle leadership and support across requirements workshops, design, development, test cycles, production cutover, post-go-live support, and environment strategy, with strong knowledge of agile methodologies. Required
- Strong communication skills, capable of collaborating effectively across technical and non-technical teams in a fast-paced environment. Required
- AWS Certified Solutions Architect, Databricks Certified Associate Developer for Apache Spark, or other relevant certifications. Preferred

Physical and Environmental Requirements

The physical requirements described here are representative of those that must be met by an associate to successfully perform the essential functions of the job.
While performing the duties of the job, the associate is required on a daily basis to analyze and interpret data, communicate, and remain in a stationary position for a significant amount of the workday, and frequently access, input, and retrieve information from the computer and other office productivity devices. The associate is regularly required to move about the office and around the corporate campus. The associate must frequently move up to 10 pounds and occasionally move up to 25 pounds.

Travel Requirements

20%: The associate is occasionally required to travel to other sites, including out-of-state, where applicable, for business.

Join Us

The Friedkin Group and its affiliates are committed to ensuring equal employment opportunities, including providing reasonable accommodations to individuals with disabilities. If you have a disability and would like to request an accommodation, please contact us at . We celebrate diversity and are committed to creating an inclusive environment for all associates.

We are seeking candidates legally authorized to work in the United States, without sponsorship.

#HP125 #LI-BM1