Requirements
Familiarity with CDH/CDP/HDP or similar Hadoop distributions, with proven experience in big data platform operations preferred;
Proficiency in Linux administration and scripting (Shell, Python, or equivalent);
Experience with major cloud platforms (AWS/GCP/Azure) and containerized environments (Kubernetes, Docker);
Strong knowledge of big data components and performance tuning, able to resolve stability and scalability issues independently;
Strong technical writing and communication skills to support documentation and solution delivery;
Self-motivated with a passion for new technologies, able to apply innovative approaches to optimize operations and drive efficiency;
Preferred: experience developing n8n workflows.
Responsibilities
Manage the daily operations of Hadoop, Spark, Flink, ClickHouse, and other big data platforms to ensure system stability and data security;
Operate and administer Linux systems using common commands and scripts written in Shell, Python, or similar languages;
Maintain and operate big data components on cloud platforms (AWS/GCP/Azure), with hands-on experience in container technologies such as Kubernetes and Docker;
Build and maintain monitoring systems (using tools such as Datadog, Prometheus, Grafana, or Zabbix) to track platform performance, optimize configurations and resource allocation, and troubleshoot performance issues;
Deploy, upgrade, and scale core big data components including Flink, Kafka, Spark, Impala, ClickHouse, and Kudu;
Write and maintain technical documentation for operations, and support platform solution design and upgrades;
Explore and adopt new technologies to optimize operational processes, and provide technical guidance to help client teams improve their capabilities.