Hybrid work from Kyiv, Sofia, Berlin, Bucharest, or Valencia.
We are seeking a talented and experienced Data Engineer to work on innovative Generative AI (GenAI) projects aimed at optimizing internal business processes and increasing efficiency for our clients.
Our team implements solutions that deliver significant business value and help scale the company's capabilities.
The product we develop is an application designed to facilitate and automate internal collaboration processes, enhancing clients' productivity, efficiency, and data security.
Our teams consist of a Technical Delivery Manager, a Business Systems Analyst, an Application Architect, Developers (dedicated FE and BE), QA Engineers, and Designers.
Responsibilities:
Develop and maintain web applications using Python, with a focus on AI/LLM integration;
Collaborate with cross-functional teams to define, design, and ship new features and enhancements;
Optimize applications for maximum performance, scalability, and maintainability.
Troubleshoot, debug, and resolve software defects and issues;
Keep up to date with the latest industry trends and technologies to ensure the software is current and competitive;
Provide technical guidance and support to other team members;
Write clean, maintainable, and well-documented code;
Participate in code reviews and contribute to improving code quality.
Requirements:
1+ years of proven experience with Databricks;
2+ years of experience with Python;
Experience with Spark, SQL, Delta Lake;
Cloud: Knowledge of Azure, AWS, or GCP;
Excellent problem-solving skills and attention to detail;
Knowledge of frameworks and libraries such as Pandas, NumPy etc.;
Familiarity with agile development methodologies, such as Scrum or Kanban;
Familiarity with Vector databases;
Understanding of machine learning concepts, NLP, and AI/LLM;
Experience working with other AI and NLP technologies, such as OpenAI's GPT family, Azure OpenAI, or related services;
Experience in building and deploying solutions using AI/LLM, including RAG;
Familiarity with NoSQL databases;
Strong verbal and written communication skills.
Nice to have:
Experience with Databricks MLflow models;
Data Warehousing: Experience with platforms like Snowflake or Redshift;
Big Data: Familiarity with tools like Hadoop, Kafka;
Optimization: Ability to optimize performance in big data systems;
ETL: Expertise in building and optimizing data pipelines.
We offer:
Paid training programs and English/Spanish language courses;
Medical insurance, sports program compensation, and a broader benefits compensation program that each employee can tailor to their personal preferences;
Comfortable working hours;
Awesome team events and a wide variety of knowledge sharing opportunities.
We are TRINETIX, a dynamic, rapidly growing technology organization with approximately 1,000 representatives across Europe, the United States, and Argentina who bring their passion, skill, and innovation to ensure we provide products that meet the needs of our partners and clients.
We offer IT solutions to business enterprises of various sizes and industries using cutting-edge technologies. We help our clients and partners improve their work processes, making them more efficient while keeping their essential objectives in focus. We serve and support business entities, enterprises, and startups globally, helping them grow and stay competitive in the digital era through efficient tech innovation, substantial professional expertise, and a solution-driven strategy.
To learn more about how we collect, process, and store your personal data, please review our Privacy Notice.