Job Description:
BMW is focusing intensively on the development of automated driving (AD). In order to develop and test highly intelligent software for environmental perception, maneuver planning and acting, which ultimately enables the self-driving capability of an autonomous car, a huge amount of environmental sensor data needs to be collected, hosted and processed in an AD Development Platform (AD Data Center). The processing covers broad data analytics, reprocessing and KPI evaluation, simulation, virtual endurance runs, and more.
The BMW AD team in China is fully integrated into a global network of highly educated experts from different domains.
In this job you will play a crucial role at the interfaces between BMW AD software development and IT teams, as well as Chinese internet tech companies and high-tech AD suppliers. Big data, distributed data and RaaS (Reprocessing as a Service) technologies (e.g. MapReduce, HDFS, Airflow, Jenkins, CI/CD) will be your core focus. Your prime target will be to accelerate AD product development by analyzing FT requirements and designing big data architectures, tasking the development of related big data applications, and managing their deployment to a massive off-site data center that BMW recently developed in cooperation with a big Chinese tech player. In order to achieve this target you will be exposed to a complex project environment involving both BMW internal and external partners. Technologies you will be working with include Docker, OpenShift, Kubernetes, Grafana, Spark, etc.
Major Responsibilities:
-Design, deployment and improvement of distributed big data systems targeting automated driving applications;
-Tasking the development and improvement of off-board backend big data applications and related communication protocols targeting automated driving use cases;
-Management and structuring of requirements provided by BMW's AD development teams regarding the big data AD platform;
-Steering of BMW China AD development teams and external cooperation partners (a big Chinese tech player) regarding data-driven development;
-Review, monitor and report the status of your own projects;
-Support in the budget planning process for the organizational unit;
-Research, recommend, design and develop our big data architecture, ensuring that system, technical and product architectures are aligned with the China data platform strategy;
Qualifications:
-Master's/Bachelor's degree in Computer Science, Electrical/Electronic Engineering, Information Technology or another related field, or equivalent;
-Good communication skills and good language skills in English; German language skills are optional and appreciated;
-Rich experience with big data processing in the Hadoop/Spark ecosystem; experience with applications like Hadoop, Hive, Spark and Kafka preferred;
-Solid programming skills in languages such as Java, Scala or Python;
-Rich experience with Docker and Kubernetes;
-Familiarity with CI/CD tools such as Jenkins and Ansible;
-Substantial knowledge and experience in software development and engineering for distributed big data cloud/compute/data lake/data center systems (MapR, OpenShift, Docker, etc.);
-Experience with scheduling tools such as Airflow and Oozie is a plus;
-Experience with AD data analysis, e.g. lidar, radar, image/video and CAN bus data;
-Experience in building and maintaining data quality standards and processes;
-Strong background working with Linux/UNIX environments;
-Strong scripting experience in Shell, Perl, Ruby or Python;
-Experience with NoSQL databases such as Elasticsearch and HBase is a plus;
-Solid communication skills, with the ability to communicate sophisticated technical concepts and align on decisions with global and China ***** partners;
-Passion for staying on top of the latest developments in the tech world and an attitude of discussing and bringing them into play;
-Must have strong hands-on experience in data warehouse ETL design and development, and be able to build scalable and complex ETL pipelines for source data in different formats;