Develop and maintain scalable cloud infrastructure for the company's big data platform using AWS services such as EC2, S3, and RDS.
Design, implement, and optimize database schemas to ensure efficient query performance while adhering to security best practices and compliance standards such as GDPR and CCPA (where applicable).
Monitor system health using tools such as AWS CloudWatch and set up alerts for potential issues (a minimal alarm sketch appears after this list).
Implement data backup strategies, including automated Amazon RDS snapshots, to support disaster recovery and business continuity (see the snapshot automation sketch after this list).
Collaborate closely with cross-functional teams, including DevOps, security analysts, and database administrators (DBAs), to ensure seamless integration between systems while maintaining data integrity and confidentiality.
Troubleshoot performance issues by analyzing logs with tools such as AWS CloudTrail or Elasticsearch to identify architectural bottlenecks and resolve them promptly with minimal impact on end users (a CloudTrail query sketch appears after this list).
Stay current with emerging technologies, best practices, and industry trends related to big data platforms such as Apache Hadoop, Spark, Kafka, and Flink (where applicable).
Develop custom Python or shell scripts to automate repetitive tasks such as system monitoring, backup management, and log analysis, and integrate them into existing infrastructure workflows without disrupting business operations (an example monitoring script appears after this list).
Provide technical support to internal stakeholders and external clients by addressing queries about data platform performance and troubleshooting issues with our cloud services (where applicable).
Participate in code reviews, pair programming sessions, knowledge-sharing workshops, and similar activities as part of the company's culture of continuous learning and skill development.
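
The monitoring and alerting duty above could be automated along the lines of the following minimal sketch, which creates a CloudWatch CPU alarm with boto3; the region, instance ID, and SNS topic ARN are placeholders, not values from this document.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU on an EC2 instance stays above 80% for two
# consecutive 5-minute periods. The instance ID and SNS topic ARN are
# placeholders for illustration only.
cloudwatch.put_metric_alarm(
    AlarmName="bigdata-node-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```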
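
For the backup responsibility, a scheduled manual RDS snapshot could be triggered with a short boto3 script; this is a sketch only, and the DB instance identifier "bigdata-prod-db" is a hypothetical name.

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="us-east-1")

# Create a timestamped manual snapshot of the platform's RDS instance.
# "bigdata-prod-db" is a placeholder identifier.
snapshot_id = "bigdata-prod-db-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier="bigdata-prod-db",
)

# Block until the snapshot is available before reporting success.
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier=snapshot_id)
print(f"Snapshot {snapshot_id} is available")
```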
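
As one way to use CloudTrail during troubleshooting, recent API activity can be pulled programmatically to correlate configuration changes with a performance regression; a sketch under that assumption, where the "ModifyDBInstance" event filter and 24-hour window are illustrative choices.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look for RDS instance modifications in the last 24 hours that might
# explain a sudden change in query performance.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ModifyDBInstance"}
    ],
    StartTime=start,
    EndTime=end,
    MaxResults=50,
)

for event in response.get("Events", []):
    print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```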
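
An example of the kind of automation script described above is sketched below: a small Python check, suitable for a cron job, that logs disk usage and the size of an application log directory. The threshold and the /var/log/bigdata path are assumed placeholders.

```python
#!/usr/bin/env python3
"""Lightweight disk-usage check intended for a cron job; paths are placeholders."""
import logging
import shutil
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

DISK_THRESHOLD = 0.85                 # warn above 85% usage (placeholder value)
LOG_DIR = Path("/var/log/bigdata")    # hypothetical application log directory


def check_disk(path: str = "/") -> None:
    """Warn if the filesystem holding `path` is nearly full."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    level = logging.WARNING if used_fraction > DISK_THRESHOLD else logging.INFO
    logging.log(level, "Disk usage on %s at %.0f%%", path, used_fraction * 100)


def log_dir_size_mb(directory: Path) -> float:
    """Total size of all files under `directory`, in megabytes."""
    return sum(f.stat().st_size for f in directory.rglob("*") if f.is_file()) / 1e6


if __name__ == "__main__":
    check_disk("/")
    if LOG_DIR.exists():
        logging.info("Log directory size: %.1f MB", log_dir_size_mb(LOG_DIR))
```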