AI Software Development Engineer in Test
Luxoft
João Pessoa, PB - 1 hour ago
Job Description
Project Description:

The Open Video project is focused on OTT platform development for one of the largest North American TV providers.

We are seeking a highly skilled AI Engineer to join our Software Quality Engineering (SQE) team. The ideal candidate will have a dual focus: contributing to AI/ML solution development while performing the responsibilities typical of a Software Development Engineer in Test (SDET). This role involves building and deploying AI-powered solutions to enhance software testing, automation frameworks, and processes.

Responsibilities:

AI Engineering:

1. AI Solution Development:

1.1. Develop AI-powered solutions to improve software quality assurance processes.

1.2. Leverage frameworks like Retrieval-Augmented Generation (RAG) and other state-of-the-art AI/ML frameworks to design innovative testing and debugging tools.

1.3. Create models for triaging and debugging test case failures using structured and unstructured data, such as log files (see the sketch after this list).

2. Framework Optimization:

2.1. Enhance testing processes with AI to automate root-cause analysis, bug detection, and recommendation systems.

2.2. Collaborate with cross-functional teams to integrate AI solutions into existing software testing pipelines.

3. MLOps Development:

3.1. Design, implement, and maintain robust MLOps pipelines for continuous integration and deployment of AI/ML models.

3.2. Automate workflows for model training, evaluation, and deployment.
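
The triage work described in item 1.3 typically starts with retrieving similar historical failures before any generation step. Below is a minimal, illustrative sketch of that retrieval half of a RAG-style triage tool, using scikit-learn (listed in this posting). The log strings, root-cause labels, and overall pipeline shape are hypothetical assumptions for illustration, not a description of the project's actual tooling.

```python
# Illustrative sketch: retrieval-based triage of a new test failure
# against historical failure logs. All logs and causes are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of past failure logs with their known root causes.
historical_logs = [
    "ERROR PlaybackService: DRM license request timed out after 30s",
    "ERROR LoginActivity: NullPointerException in token refresh handler",
    "ERROR ManifestParser: HLS manifest missing EXT-X-ENDLIST tag",
]
known_causes = [
    "DRM backend latency",
    "Auth token refresh bug",
    "Malformed HLS manifest",
]

def retrieve_similar_failures(new_log: str, top_k: int = 2):
    """Return the top_k most similar historical failures for a new log."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(historical_logs + [new_log])
    # Compare the new log (last row) against every historical log.
    similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = similarities.argsort()[::-1][:top_k]
    return [(known_causes[i], historical_logs[i], float(similarities[i])) for i in ranked]

if __name__ == "__main__":
    new_failure = "ERROR PlaybackService: DRM license request failed, socket timeout"
    for cause, log, score in retrieve_similar_failures(new_failure):
        print(f"{score:.2f}  {cause}  <-  {log}")
```

In a full RAG pipeline, the retrieved log/cause pairs would be injected into an LLM prompt that drafts the triage summary; that generation step is intentionally omitted here.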

SDET Responsibilities:

1. Test Case Creation & Maintenance:

1.1. Write, maintain, and optimize manual and automated test cases for functional, regression, and performance testing.

1.2. Collaborate with development teams to define test strategies and acceptance criteria for new features.

2. Automation Framework Development:

2.1. Build and enhance automation frameworks for Android, Web, and backend systems using tools like Appium, Selenium, or equivalent.

2.2. Integrate automation pre-checks, post-checks, and reporting systems to ensure test case reliability (see the sketch after this list).

3. Code Review & Collaboration:

3.1. Review and provide feedback on code quality and design for testability.

3.2. Work closely with software engineers and other SDETs to drive testing best practices.

4. Test Data Management:

4.1. Work with engineering teams to ensure proper test data management, including synthetic data generation for testing AI-based solutions.
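
As referenced in item 2.2, the sketch below shows one common way pre-checks and post-checks are wired into an automated Web test, using pytest fixtures and Selenium (both listed in this posting). The URL, expected title, and browser choice are hypothetical; this is an illustrative pattern, not the project's actual framework.

```python
# Illustrative sketch: a Web UI check where a pytest fixture performs the
# pre-check (browser session available) and post-check (clean teardown).
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Pre-check: start a browser session; skip the test if no driver is available.
    try:
        drv = webdriver.Chrome()
    except Exception as exc:
        pytest.skip(f"WebDriver unavailable: {exc}")
    yield drv
    # Post-check / teardown: always release the browser session.
    drv.quit()


def test_homepage_title(driver):
    # Hypothetical OTT web client endpoint and title.
    driver.get("https://ott.example.com")
    assert "Open Video" in driver.title
```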

Mandatory Skills Description:

AI/ML Expertise:
  • Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
  • Experience with MLOps tools (e.g., MLflow, Kubeflow, AWS Bedrock, SageMaker).
  • Proficiency in RAG frameworks and understanding of knowledge-based systems.
  • Familiarity with LangChain and Hugging Face Transformers; experience working with Streamlit.
  • Strong command of AWS services, prompt engineering techniques, and DSPy.


SDET Expertise:
  • Proficiency in test automation tools (e.g., Appium, Selenium, JUnit, TestNG).
  • Strong programming skills in languages such as Python and Java.
  • Experience with CI/CD tools like Jenkins, GitLab CI, or equivalent.


General Skills:
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
  • Knowledge of SQL and database management for test data validation.
  • Strong debugging and troubleshooting skills in software systems.


Languages:

English: B2 Upper Intermediate