
Hadoop Developer
Posted on CazViden by Anu Singh
Join Anu Singh as a Hadoop Developer in Addison, TX. Full-time role with career growth, health benefits, and relocation support. Apply today!
Salary
USD 90,000 - USD 130,000 per year
Location
Addison, Texas, United States
Employment type
Full-time
Work arrangement
Not specified
Job Description
At Anu Singh, a rapidly growing leader in data engineering solutions based in Addison, Texas, we're looking for a skilled Hadoop Developer to join our innovative team. This full-time, on-site role offers a unique opportunity to advance your career by working on cutting-edge big data technologies that drive strategic business decisions. If you have a passion for data processing and want to contribute to impactful projects in a collaborative environment, this position is for you. Located in the heart of Addison, TX, this role requires on-site presence, with relocation assistance available for qualified candidates. Join us and benefit from a culture that values innovation, continuous learning, and career development.

Key Responsibilities
- Design and develop scalable ETL/ELT pipelines to efficiently process large datasets using Hadoop and related technologies.
- Build and optimize distributed data processing jobs leveraging Apache Spark, Databricks, and Hadoop ecosystems to enhance performance and reliability.
- Implement advanced data transformations using Python and SQL to support complex analytics and reporting requirements.
- Manage data ingestion workflows into relational and cloud-based platforms such as Teradata, Snowflake, Azure, AWS, or GCP, ensuring data integrity and availability.
- Collaborate with data architects and analysts to design robust data models, metadata standards, and data quality controls aligned with enterprise governance.
- Develop and maintain automated test suites, analyze test failures, and support test-driven development practices to ensure high-quality deliverables.
- Troubleshoot pipeline issues, identify performance bottlenecks, and implement solutions to maintain optimal system operations.
- Adhere to compliance, security policies, and operational risk frameworks to safeguard sensitive data and maintain regulatory standards.
What We're Looking For

Required:
- 5 to 8 years of professional experience in Data Engineering, Software Engineering, or related technical fields.
- Proficiency in Python and SQL for complex data transformation and querying.
- Hands-on experience with Hadoop ecosystem tools, Apache Spark, and Databricks.
- Strong knowledge of ETL/ELT pipeline design and distributed data processing.
- Experience with cloud data platforms such as AWS, Azure, or GCP, and relational databases such as Teradata or Snowflake.
- Understanding of data modeling, metadata management, data lineage, and data quality frameworks.
- Familiarity with CI/CD pipelines, version control systems (Git), and automated deployment.
- Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.

Preferred:
- Experience with enterprise data governance, compliance, and operational risk management.
- Knowledge of test-driven development and automated testing frameworks.
- Strong communication skills and a proactive approach to continuous learning and career growth.

What We Offer
- Comprehensive health insurance plans to support your well-being.
- Paid time off to maintain a healthy work-life balance.
- Retirement plan options to secure your financial future.
- Relocation assistance for candidates moving to Addison, TX.
- Opportunities for professional development and career advancement within a growing company.
- A collaborative, innovative work environment that fosters creativity and impact.

Frequently Asked Questions

Is this position remote or on-site?
This role is on-site in Addison, Texas, with relocation support available for qualified candidates.

What level of experience is required?
Candidates should have 5 to 8 years of experience in data engineering or a related field.

What technologies will I work with?
You will primarily work with Hadoop, Apache Spark, Databricks, Python, SQL, and cloud platforms such as AWS, Azure, or GCP.

Are there opportunities for career growth?
Yes, we prioritize career development and offer advancement opportunities within our expanding team.

What is the application process?
We conduct video interviews.