Entry Level Data Engineer

Fiable Consulting
$30 - $40 / Hour 
Full-time, Contract
Junior level

Job Description

Please read the job description before applying to the position.

Experience Level - 0 to 2 Years
Freshers are welcome to apply for this role (OPT candidates may also apply).

Kindly fill out the Google Form by clicking the link below. ONLY CANDIDATES WHO HAVE FILLED OUT THE GOOGLE FORM WILL BE CONTACTED.

About Us:

Fiable Consulting Inc. is a visionary US-based IT staffing company with over 10 years of experience in the field. We specialize in providing job opportunities exclusively on W2 positions through our extensive network of direct clients, including Fortune 500 companies. We focus on assisting OPT candidates, H1B visa holders, H2 and L2 visa holders, Green Card holders, US citizens, and freshers in securing rewarding IT positions. Our dedicated bench sales recruitment team markets candidate profiles effectively, typically securing placements within 4-6 weeks. Fiable also offers direct-hire (permanent placement) services, helping clients fill permanent positions across various technology domains.

Job Categories:

  • Full Stack Technologies
  • Data Science Domain
  • Cloud Technologies
  • DevOps Technologies
  • Business/Quality Analysis

Candidates interested in the above domains are encouraged to apply.


  • 100% guaranteed successful placement.
  • No upfront fees, security deposits, or commissions; our goal is to secure job placements with our direct clients.
  • Salary on our payroll is in line with market standards.
  • H1B visa sponsorship, training, and back-end support provided by Fiable Consulting Inc.
  • No bonds between candidates and Fiable Consulting Inc.
  • Health insurance benefits provided.
  • STEM OPT extensions supported, as we are E-Verified.
  • Green card sponsorship for qualified candidates.

For more information, candidates can apply to the position and will be contacted soon.

Job Description:

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust data pipelines and infrastructure. You will work closely with cross-functional teams to gather requirements, optimize data flow, and ensure data availability, reliability, and accuracy. Your expertise in Big Data tools, Scala, Spark, and related technologies will be pivotal in shaping our data architecture and driving actionable insights from our vast datasets.

Required Skills

  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree is a plus.
  • 0 to 2 years of proven experience as a Data Engineer, with a strong focus on Big Data technologies.
  • Proficiency in the Scala programming language.
  • Hands-on experience with Apache Spark for large-scale data processing and analytics.
  • In-depth knowledge of ETL processes and data integration techniques.
  • Familiarity with distributed data storage and processing systems such as Hadoop, Hive, and HDFS.
  • Experience with data modeling, schema design, and data warehousing concepts.
  • Strong SQL skills and understanding of database systems (e.g., MySQL, PostgreSQL).
  • Experience with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) processes.
  • Excellent problem-solving skills and a proactive attitude towards troubleshooting and issue resolution.
  • Ability to work effectively in a collaborative team environment and communicate technical concepts to non-technical stakeholders.
  • Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus.
  • Strong attention to detail and a passion for delivering high-quality, reliable data solutions.


Responsibilities:

  • Collaborate with data scientists, analysts, and stakeholders to understand data requirements and translate them into scalable data solutions.
  • Design, develop, and deploy data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
  • Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
  • Implement data partitioning, clustering, and indexing strategies to enhance query performance.
  • Monitor and troubleshoot data pipeline issues, ensuring timely resolution to minimize downtime and data loss.
  • Work with large-scale datasets in a distributed computing environment using tools such as Hadoop, Spark, and related technologies.
  • Explore and evaluate new technologies and techniques to improve data processing efficiency and effectiveness.
  • Collaborate with DevOps teams to automate deployment processes and ensure a seamless integration of data pipelines.
  • Ensure compliance with data security and privacy standards throughout the data lifecycle.

Job Types: Full-time, Contract, Permanent


Benefits:

  • 401(k)
  • Dental insurance
  • Health insurance
  • Paid time off

Experience level:

  • No experience needed
  • Under 1 year
  • 1 year
  • 2 years

Work Location: Remote/Onsite


