
Data Engineer – Remote


Emma of Torre.ai

Mon, 06 Oct 2025 22:55:35 GMT

Overview
I’m helping Nearsure find a top candidate to join their team full-time for the role of Data Engineer – Remote. You will architect and optimize scalable data platforms while guiding teams and ensuring reliability.

Location: Remote (Argentina)

Mission of Nearsure: “At Nearsure, our formula for success starts with our people power. Born in LATAM, we nurture a cross-border culture of innovation and a sharp eye for business.”
What makes you a strong candidate

You are an expert in SQL, Python, data warehousing, data modeling, and data engineering. You have 3+ years of experience with GCP (Google Cloud Platform), Apache Spark, and AWS. English – fully fluent.
Responsibilities

- Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently.
- Lead improvements to existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency.
- Establish and maintain robust data governance practices that empower cross-functional teams to access and trust data.
- Transform raw datasets into clean, usable formats for analytics, modeling, and reporting.
- Investigate and resolve complex data issues, ensuring data accuracy and system resilience.
- Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability.
- Stay current with industry trends and emerging technologies to continuously raise the bar on our engineering practices.
About the project

As a Senior Data Engineer, you will be a strategic professional shaping the foundation of our data platform.
You will design and evolve scalable infrastructure, enable data governance at scale, and ensure our data assets are clean, reliable, and accessible.
You will be a go-to expert, mentoring other engineers and influencing architectural decisions across the company.
Qualifications

- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering: designing, building, and operating scalable data ingestion, processing, and serving layers.
- 5+ years of experience with SQL for analytics, transformations, and performance optimization.
- 5+ years of experience with Python for data manipulation and pipeline development (e.g., PySpark, pandas).
- 5+ years of experience with data modeling for Data Warehouses/Lakehouses and building ELT/ETL pipelines.
- 3+ years of experience with distributed data processing (Apache Spark) for batch and/or streaming at scale.
- 3+ years of experience with cloud platforms (AWS and/or GCP) for data engineering workloads.
- 2+ years of experience implementing data governance at scale across multiple domains/teams.
- 2+ years of experience improving data frameworks/pipelines for performance, reliability, and cost efficiency.
- 1+ years of experience with automated testing and CI/CD for data pipelines and observability.
- Experience with API-based integrations (JDBC/ODBC, REST, SOAP) and data ingestion patterns.
- A strong observability mindset, with logging, auditing, and tracing for data platforms.
- Data security and privacy (PII handling, encryption, tokenization, IAM).
- Automation of data pipelines and environments (schedulers, packaging, reproducibility).
- Hands-on experience with diverse data types and formats (JSON, Avro, ORC, Parquet; columnar vs. row).
- Proficiency with Git-based workflows.
- Understanding of modern data architectures: Data Lake, Data Warehouse, Lakehouse, Data Mesh, Data Fabric, Delta/transactional lakes.
- An advanced English level is required for this role, as you will work with US clients.
What to expect from our hiring process

1. Let’s chat about your experience!
2. Impress our recruiters, and you’ll move on to a technical interview with our top developers.
3. Nail that, and you’ll meet our client – your final step to joining our amazing team!
By applying to this position, you authorize Nearsure to collect, store, transfer, and process your personal data in accordance with our Privacy Policy.
For more information, please review our Privacy Policy.
Benefits

- Work in an international, dynamic, and passionate environment with a great company culture.
- Excellent opportunities for professional growth in a friendly, multicultural team.
- Competitive salary and compensation in USD.
- Work from home, with flexible working schedules.
- Paid time off (annual leave, national holidays, sick time, and parental leave).
- Take part in challenging projects for distributed companies in the US.
- A tailor-made benefits package focused on health, wellbeing, entertainment, training, and personal finances, available for you to choose what you really care about.
