• 15k-30k, 3-5 yrs experience / Bachelor's
    Content & news, short video / Series D and above / 2000+ employees
    Responsibilities:
    1. Support ByteDance R&D teams as the point of contact for the business teams' day-to-day hiring needs.
    2. Own end-to-end recruiting process management; understand the business deeply, grasp hiring needs, and build a strong talent pipeline for fast-growing teams.
    3. Participate in or lead senior-talent sourcing for the team, and manage the full cycle of interviews, offers, and onboarding follow-up.
    4. Manage recruiting channels, such as headhunter and job-board resources.
    5. Own recruiting projects such as campus recruiting, talent mapping, and senior-talent hiring.
    6. Analyze recruiting data and funnels to improve hiring effectiveness.
    Requirements:
    1. Bachelor's degree or above; 5+ years of recruiting experience, in-house or agency.
    2. Strong project execution and follow-through.
    3. Excellent communication and problem-solving skills; high interpersonal sensitivity and influence.
    4. Passionate about recruiting and strongly self-driven.
    5. Experience recruiting for IT/internet R&D roles preferred; high-volume hiring experience preferred; senior-talent hiring experience preferred.
    6. Experience with top-tier industry R&D talent preferred.
    7. Experience recruiting overseas returnee talent preferred.
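The funnel analysis this posting asks for (duty 6) reduces to stage-over-stage conversion rates; a minimal Python sketch, with invented stage names and counts (a real pipeline would pull these from an ATS):

```python
# Toy recruiting-funnel analysis: stage names and counts are invented
# for illustration; a real pipeline would pull these from an ATS.
funnel = [
    ("sourced", 500),
    ("screened", 200),
    ("interviewed", 80),
    ("offered", 20),
    ("hired", 14),
]

def conversion_rates(stages):
    """Stage-to-stage pass-through rates, rounded to 2 decimals."""
    return [
        (nxt, round(n_next / n_prev, 2))
        for (_, n_prev), (nxt, n_next) in zip(stages, stages[1:])
    ]

rates = conversion_rates(funnel)
# The weakest stage is the one with the lowest pass-through rate.
weakest = min(rates, key=lambda r: r[1])
```

"Improving hiring effectiveness" then means attacking the weakest stage first (here, the interview-to-offer step).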
  • 8k-13k, 1-3 yrs experience / Bachelor's
    Content & news, short video / Series D and above / 2000+ employees
    Responsibilities:
    1. Take on hiring requirements, understand the business context and target candidate profile, and source résumés that fit.
    2. Follow up on candidate interest, confirm basic information, and handle interview scheduling and reception.
    3. Exercise sound judgment on talent, bring in well-matched candidates for the organization, and manage candidate expectations and the overall recruiting process.
    4. Build your own sourcing channels: find candidates through multiple channels, maintain ongoing relationships, and develop a healthy referral pipeline.
    5. Regularly collect and summarize talent-market information, feeding market intelligence back to the business to inform hiring decisions.
    Requirements:
    1. Bachelor's degree or above; new graduate or 1-3 years of recruiting experience.
    2. Excellent communication skills; able to liaise smoothly with business stakeholders and candidates.
    3. Strong learning ability and self-motivation, with a drive to keep improving and growing.
    4. Resilient and responsible; proactively spots and solves problems when hiring needs get complex.
    5. High-end recruiting experience preferred; experience supporting internet-industry recruiting preferred.
  • 25k-40k, 10+ yrs experience / Bachelor's
    Education / No funding needed / 500-2000 employees
    Responsibilities:
    Job Title: Head (Data Intelligence Department)
    Department: Data Intelligence Department, HKUST(GZ)
    As a higher education institution committed to innovation and data-driven decision-making, HKUST(GZ) is establishing a new Data Intelligence Department aimed at harnessing the power of data to enhance operational efficiency and strategic growth across all levels of the organization. We are seeking a visionary leader to head this critical initiative. The ideal candidate will possess a blend of strategic vision, technical expertise, and leadership skills to build and nurture a high-performing team dedicated to data governance, data analytics, and business intelligence.
    Duties
    1. Drive cooperation among departments to develop and implement a university-wide data governance strategy that aligns with the University's mission and strategic goals.
    2. Establish the data governance framework, develop long-term plans for the AI/HPC Data Center and Smart Campus, and promote the use of computing technologies and improvements in energy efficiency to achieve the vision of a "Sustainable Smart Campus".
    3. Develop and drive the implementation of campus data governance policies, including standards for the allocation and usage of computing resources and smart-campus services, policies for full data-lifecycle management, and category- and class-based data security and protection.
    4. Lead innovation in the management and services of the AI/HPC Data Center to meet the computational demands of research, teaching, and future societal development, and enhance the University's research capabilities by improving operational efficiency and service capabilities.
    5. Oversee the operation and intelligent upgrades of Smart Campus systems, including multimedia and research/teaching facilities, office and life services, security access, and smart buildings, ensuring that the systems operate and interact efficiently and upgrading the design of smart-campus services.
    6. Supervise data compliance to ensure it meets domestic and international data protection laws and standards, ethical norms, and the University's data confidentiality requirements.
    7. Guide the data teams of the University's departments in establishing and optimizing data-processing workflows, foster a culture of data-driven decision-making, and promote data accumulation into data assets, driving continuous institutional improvement through the strategic use of data across the University.
    Requirements:
    A qualified candidate should:
    1. Hold a bachelor's degree or above in Data Science, Computer Science, or a related field.
    2. Have at least 10 years of relevant work experience, including data management and team leadership in large organizations.
    3. Possess excellent communication and teamwork skills, and be able to collaborate across departments to promote the implementation of data policies.
    4. Be familiar with domestic and international data security and compliance policies and regulations, such as China's Data Security Law and Cybersecurity Law, with experience handling data compliance across multiple jurisdictions.
    5. Have strong strategic planning, organizational management, innovation management, and change implementation capabilities.
    This is a Mainland appointment; the appointee will be offered a contract by the HKUST(GZ) entity in accordance with Mainland labor laws and regulations. Starting salary will be commensurate with qualifications and experience.
  • 50k-58k, 10+ yrs experience / Bachelor's
    Marketing services | consulting / Series B / 50-150 employees
    Responsibilities
    - Own & deliver cross-team analytics epics end-to-end (often multi-quarter): scoping, design, implementation, rollout, and adoption, with minimal oversight
    - Set technical direction for our analytics/BI layer (Looker + dbt + Trino/Spark) and data products; lead design reviews and establish guardrails (cost, reliability, privacy, inclusion)
    - Model and govern data: design stable contracts (schemas/SLAs), manage lineage, and evolve domain models that unlock self-service and performance at scale
    - Optimize performance & cost across engines (Trino, Spark/Databricks): plan-level analysis, join/partitioning strategies, aggregation layers, caching/materialization; set SLOs with monitoring/alerting
    - Raise the bar on engineering quality: testing, CI/CD, documentation, privacy/security, on-call hygiene; lead incident reviews and drive permanent fixes
    - Mentor & multiply: coach engineers/analysts, delegate effectively, and contribute to recruiting while holding the bar
    Qualifications
    - Education: Bachelor's degree or higher in Computer Science or a related technical field, or equivalent practical experience
    - Experience: 8-12+ years in data/analytics engineering or adjacent DE/BI roles, including 5+ years owning production semantic models & transformations and 3+ years leading cross-team initiatives end-to-end
    - SQL & performance: Expert SQL with the ability to read and act on query plans (distributed + warehouse); proven wins on TB-scale data (e.g., ≥2× latency reduction or ≥30% cost savings) via partitioning, file formats, pruning, aggregations, and caching/materialization
    - dbt at scale: Operated mid-to-large dbt projects (≈100+ models), using incremental models, tests, exposures, macros/packages, CI/CD, and data contracts; strong documentation and naming standards
    - Looker semantic layer: Owned LookML modeling across multiple domains; shipped governed explores/measures for 100+ users, with version control, code review, release process, and change management that enable self-service analytics
    - Engines & storage: Hands-on with Trino/Presto and/or Spark/Databricks (distributed plans, join strategies, partitioning, autoscaling); comfortable with Parquet/Iceberg table layouts and query-aware modeling
    - Reliability & governance: You set SLOs for BI/analytics surfaces, establish monitoring/alerting, manage lineage & SLAs, and run post-incident reviews to land permanent fixes
    - Leadership: Self-directed; sets technical direction for a domain, drives multi-quarter epics, mentors multiple engineers/analysts, leads design reviews, and raises the hiring/promo bar
    - Software fundamentals: Proficient in Python and data tooling; strong testing, CI/CD, and code-review hygiene; privacy/security awareness
    - AI/LLM enablement: Experience designing or integrating AI-assisted analytics (e.g., chat-to-SQL over a semantic layer, RAG on dbt/Looker docs) with guardrails for access control/PII and an evaluation plan; can quantify adoption or ticket reduction
    Nice to Have
    - Ad-tech domain expertise (RTB auction dynamics, mediation, attribution, and LTV)
    - Production ops for analytics infra: GitOps (Argo CD), IaC (Terraform), Kubernetes-based data services; incident playbooks for data/BI
    - Streaming & CDC: Kafka/Kinesis with Flink or Spark Structured Streaming to power near-real-time analytics
    - JVM stack: Scala/Java for Spark jobs/UDFs or high-throughput data services
    - Feature/ML data interfaces: feature marts or stores (e.g., Feast), batch/online syncing, model telemetry hooks
    - Privacy & governance at scale: row/column-level security, tokenization, policy-as-code; familiarity with GDPR/CCPA impacts
    - Data observability & lineage tooling: Datadog, Prometheus/Grafana, OpenLineage/DataHub/Amundsen; automated freshness/volume/uniqueness checks
    - Experimentation: Experience building the foundations for A/B testing: event definitions, consistent metrics, and safeguards for valid results
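The "incremental models" named in the dbt qualification usually mean processing only rows newer than a high-water mark on each run; a toy pure-Python sketch of that pattern (row shapes and timestamps are invented, not taken from any real project):

```python
# Minimal sketch of dbt-style incremental loading: only rows with an
# updated_at later than the current high-water mark are loaded.
# Row shapes and timestamps are invented for illustration.
def incremental_load(target, source_rows):
    """Append rows newer than the target's max updated_at; return count loaded."""
    high_water = max((r["updated_at"] for r in target), default=0)
    new_rows = [r for r in source_rows if r["updated_at"] > high_water]
    target.extend(new_rows)
    return len(new_rows)

warehouse = [{"id": 1, "updated_at": 10}]
staged = [
    {"id": 1, "updated_at": 10},   # already loaded -> skipped
    {"id": 2, "updated_at": 12},   # new -> loaded
    {"id": 3, "updated_at": 15},   # new -> loaded
]
loaded = incremental_load(warehouse, staged)
```

In dbt itself this filter lives in an `is_incremental()` block of the model's SQL; the point is the same: rescan only the delta, not the full table.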
  • 35k-45k · 13-month salary, 3-5 yrs experience / PhD
    Data services, software development / No funding needed / 150-500 employees
    Alvanon HK, Ltd. As the leading global apparel business and product development consultancy, we help our clients grow sales and improve profitability by aligning their internal teams and strategic suppliers, engaging consumers, and implementing world-class innovations throughout their product development processes and supply chains. We sincerely invite analytical, energetic, and self-motivated individuals to join Alvanon. We are looking for candidates for the following position: Machine Learning/Data Science Manager.
    You will work at our head office in Hong Kong, developing new products relating to body fit and fashion technology using engineering and machine learning techniques. You will lead the machine learning team, working closely with other stakeholders to create innovative solutions and products with the latest machine learning technologies.
    Responsibilities
    ● Lead and drive the team to deliver AI/ML solutions.
    ● Identify and communicate with key stakeholders and understand their requirements.
    ● Research and develop AI/ML solutions that have real impact on our business and products.
    ● Implement machine learning/deep learning models and bring them into production.
    ● Work with data scientists and software developers through the entire R&D pipeline (problem analysis, idea generation, data collection, model prototyping, evaluation, implementation, deployment, and maintenance).
    ● Maintain and improve the in-house interface for application prototyping, model visualization, and evaluation.
    Requirements
    ● Doctoral degree in Computer Science/Information Systems/Statistics or a related quantitative discipline
    ● A minimum of 5 years of working experience in the relevant field
    ● Understanding of common machine learning models/algorithms, such as SVM, logistic regression, Bayesian regression, and neural networks
    ● Basic understanding of probability, statistics, optimization, data structures, and algorithms
    ● Hands-on experience with machine learning/deep learning tools such as PyTorch, TensorFlow, and Keras
    ● Strong communication skills in English, Mandarin, and Cantonese
    ● Research experience in AI/ML/CV/CG in companies or university research labs is a plus
    Personality
    ● Effective communication skills
    ● A sense of ownership and the desire to deliver great products
    To apply, please
  • 25k-40k, 5-10 yrs experience / Bachelor's
    Travel | mobility / Series D and above / 500-2000 employees
    We are seeking a data professional with combined expertise in data analysis and data warehousing to join our dynamic team. The ideal candidate will focus on leveraging data analysis to drive overall data initiatives. This role will be responsible for designing and optimizing data models, handling complex datasets, and improving data quality and processing efficiency. We expect the candidate to work independently while also collaborating closely with the team as needed to drive data-driven business growth.
    Key Responsibilities:
    - Design and maintain the data warehouse architecture to support various business lines, including but not limited to Attractions, Mobility, and Hotels.
    - Develop a deep understanding of each business line and use data analysis to support business decisions and strategy development.
    - Build and optimize data models to ensure data accuracy and reliability.
    - Independently handle and optimize complex datasets, enhancing data quality and processing workflows.
    - Collaborate closely with both business and technical teams to ensure data solutions meet business requirements.
    - Write technical documentation and maintain the data warehouse's data dictionary.
    Qualifications:
    - Over 5 years of experience in the data warehouse field.
    - Proficient in SQL and database technologies, with hands-on experience managing large-scale databases.
    - Strong experience in data model construction, with the ability to independently design and optimize complex data models.
    - Extensive experience in data quality and underlying data processing, with the ability to effectively resolve data issues.
    - Familiarity with dbt, with practical experience preferred.
    - Strong analytical thinking and problem-solving skills, with the ability to complete projects independently.
    - Excellent communication and teamwork skills, capable of effectively interacting with team members from diverse backgrounds.
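The data-quality work this role describes typically starts with checks like dbt's built-in `not_null` and `unique` tests; a minimal pure-Python sketch of both, with invented rows and column names:

```python
# Toy versions of two standard data-quality checks (not_null, unique).
# Column names and rows are invented for illustration; each check
# returns the failing rows, so an empty list means the check passed.
def not_null(rows, col):
    """Rows where the given column is missing."""
    return [r for r in rows if r.get(col) is None]

def unique(rows, col):
    """Rows whose key value was already seen earlier in the table."""
    seen, dupes = set(), []
    for r in rows:
        v = r.get(col)
        if v in seen:
            dupes.append(r)
        seen.add(v)
    return dupes

rows = [
    {"booking_id": "A1", "hotel": "x"},
    {"booking_id": "A1", "hotel": "y"},   # duplicate key
    {"booking_id": None, "hotel": "z"},   # missing key
]
null_failures = not_null(rows, "booking_id")
dupe_failures = unique(rows, "booking_id")
```

Returning the offending rows rather than a pass/fail flag is what makes such checks actionable: the failures feed directly into the "effectively resolve data issues" part of the job.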
  • 20k-30k, 1-3 yrs experience / any degree
    NCS
    IT services | consulting, network communications / No funding needed / 2000+ employees
    Large-scale system setup/deployment experience; familiar with Shell/Python scripting.
  • 25k-40k · 14-month salary, 5-10 yrs experience / any degree
    Marketing services | consulting, data services | consulting / Listed company / 500-2000 employees
    Responsibilities: Work with the product development teams to keep the key services and critical infrastructure of the FreeWheel data platform running stably and reliably.
    What you'll do:
    1. Understand the business deeply and continuously improve business SLOs/SLAs.
    2. Through ongoing, comprehensive operational data analysis (availability metrics, incident history, resource utilization, etc.), identify weak spots in system capacity, availability, and stability, and drive improvement projects to completion.
    3. Help build operations tools and platforms, advance automation, quantify with data, and solve production issues with code.
    4. Take part in incident response; continuously refine the monitoring system, improve alert accuracy, and shorten time to fault localization.
    5. Accumulate operational best practices, provide guidance on architecture design and resource selection for the business and infrastructure, and produce standard operating procedure documentation.
    Requirements:
    1. 5+ years of relevant experience; computer science or a related major (communications/electronics/information/automation, etc.) preferred.
    2. Familiar with the major cloud providers and their services, e.g. AWS/GCP/Azure/AliCloud.
    3. Experience managing and optimizing cloud environments, including cost management, security management, operations management, and application architecture optimization.
    4. Familiar with popular distributed big-data and message-queue platforms: Aerospike, Kafka, Hadoop, YARN, HDFS, HBase, Druid, or other NoSQL systems.
    5. Embraces "Infrastructure as Code" with substantial hands-on experience; familiar with vendor and open-source solutions such as CloudFormation/Terraform.
    6. Experience designing or using operations platforms, e.g. designed or helped build an ops management platform: resource management, K8s management, configuration management.
    7. Substantial hands-on experience with a range of cloud services, including but not limited to VPC, Subnets, Security Groups, EC2, S3, IAM, Route 53, Security Hub, etc.
    8. Deep understanding of Linux, with skills across open-source tooling: Kubernetes/Container/Nginx/Ansible/Prometheus/Grafana/ELK.
    9. Golang experience a plus.
    10. Proactive, with a strong sense of responsibility and execution; thoughtful and reflective, with strong learning, problem-analysis, and follow-through skills.
    11. Basic spoken English and strong reading/writing skills; able to quickly fit into an English-language work environment.
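The SLO work this posting leads with is usually made concrete as an error budget: the downtime a given availability target allows per window. A quick arithmetic sketch (the 99.9% target is an illustrative assumption, not FreeWheel's actual SLO):

```python
# Error budget implied by an availability SLO over a rolling window.
# The 99.9% target below is an illustrative assumption.
def error_budget_minutes(slo, window_days=30):
    """Minutes of allowed downtime per window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)   # about 43.2 minutes per 30 days
```

The budget turns "improve availability" into an operational rule: while unspent, ship changes; once burned, prioritize reliability work.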
  • 45k-65k, 5-10 yrs experience / Bachelor's
    Other / No funding needed / 2000+ employees
    Your role
    QIMA is growing 35% a year, 20 points of which come from acquisitions. It is paramount that we integrate newly acquired companies quickly so that they can extend the benefit of our state-of-the-art data management & dashboards to our new clients and colleagues. Data integration plays a key role here: how do we understand the data of a newly acquired company quickly and unambiguously? How do we connect this data to our existing data flows?
    Data plays a key role at QIMA, as we master the entire data flow: data collection (with our own inspectors, auditors, and labs), data processing (BI and data scientists), and data actionability (insights for our customers and for our managers). Data is spread across different departments and areas of expertise inside the company: Marketing, Operations, Sales, IT, and more. Data governance is key, and collaboration around data will unlock the potential to bring even more value to our customers about the quality of their products, and to our managers about their operations. These data challenges lead us to look for our Head of Data.
    In this role, your main responsibilities will include, but are not limited to:
    - Project Management
      o Define the business cases of data projects with stakeholders, and deliver them
      o Lead the transversal projects around our data warehouse and cloud ETL
      o Lead the Master Data Management projects, leveraging the key skills and technologies already in place across departments
      o Drive the data integration of newly acquired companies within the QIMA group in order to synchronize reporting dashboards and provide a transversal understanding of the business
      o Lead the discussion and integration projects with external partners
      o Track results and provide continuous improvement
      o Be responsible for the budget, roadmap, quality, and delivery of these projects
    - People Management
      o Manage the Data Engineering and Business Intelligence teams
    - Community Animation
      o Animate data governance across domains and departments
      o Be the guardian of data quality in the group, challenge data inconsistencies, and ensure that data is shared by all departments in all circumstances
      o Implement knowledge-sharing practices inside the data community
      o Be responsible for data lineage and data quality
    - Management of the run
      o Cooperate with our IT support organization to set up support for the newly created systems
      o Organize and manage the day-to-day operation and support of these systems
    Requirements: In order to succeed in this role, you must:
    - Hold a Master's degree in computer science
    - Have extensive experience and knowledge in data solution architecting
    - Have experience in transversal projects
    - Be hands-on and autonomous, at ease discussing with a CEO or a field operator
    - Be open-minded, agile with change, and pragmatic
    - Be able to drive a workstream and train final users
    - Be able to work in a multinational environment and on multiple simultaneous projects
    - Have strong communication skills, both oral and written
    - Have excellent teamwork and interpersonal skills
    - Be fluent in English: daily use required with our colleagues all over the world
    - If you are based in Europe, be willing and able to travel
  • 35k-50k, 5-10 yrs experience / Bachelor's
    Other / No funding needed / 2000+ employees
    Do you have experience in architecting data at the scale of a global organization? Do you enjoy working on cutting-edge technologies while supporting end users in achieving their business goals?
    About QIMA
    You will be linking our new companies in the Americas, our teams in Europe (mostly France), and our teams in Asia (China, Hong Kong, Philippines).
    Your role
    QIMA is growing 35% a year, 20 points of which come from acquisitions. It is paramount that we integrate newly acquired companies quickly so that they can extend the benefit of our state-of-the-art data management & dashboards to our new clients and colleagues. Data integration plays a key role here: how do we understand the data of a newly acquired company quickly and unambiguously? How do we connect this data to our existing data flows?
    Data plays a key role at QIMA, as we master the entire data flow: data collection (with our own inspectors, auditors, and labs), data processing (BI and data scientists), and data actionability (insights for our customers and for our managers). Data is spread across different departments and areas of expertise inside the company: Marketing, Operations, Sales, IT, and more. Data governance is key, and collaboration around data will unlock the potential to bring even more value to our customers about the quality of their products, and to our managers about their operations. These data challenges lead us to look for our Head of Data.
    In this role, your main responsibilities will include, but are not limited to:
    - Project Management
      o Define the business cases of data projects with stakeholders, and deliver them
      o Lead the transversal projects around our data warehouse and cloud ETL
      o Lead the Master Data Management projects, leveraging the key skills and technologies already in place across departments
      o Drive the data integration of newly acquired companies within the QIMA group in order to synchronize reporting dashboards and provide a transversal understanding of the business
      o Lead the discussion and integration projects with external partners
      o Track results and provide continuous improvement
      o Be responsible for the budget, roadmap, quality, and delivery of these projects
    - People Management
      o Manage the Data Engineering and Business Intelligence teams
    - Community Animation
      o Animate data governance across domains and departments
      o Be the guardian of data quality in the group, challenge data inconsistencies, and ensure that data is shared by all departments in all circumstances
      o Implement knowledge-sharing practices inside the data community
      o Be responsible for data lineage and data quality
    - Management of the run
      o Cooperate with our IT support organization to set up support for the newly created systems
      o Organize and manage the day-to-day operation and support of these systems
    Requirements: In order to succeed in this role, you must:
    - Hold a Master's degree in computer science
    - Have extensive experience and knowledge in data solution architecting
    - Have experience in transversal projects
    - Be hands-on and autonomous, at ease discussing with a CEO or a field operator
    - Be open-minded, agile with change, and pragmatic
    - Be able to drive a workstream and train final users
    - Be able to work in a multinational environment and on multiple simultaneous projects
    - Have strong communication skills, both oral and written
    - Have excellent teamwork and interpersonal skills
    - Be fluent in English: daily use required with our colleagues all over the world
    - If you are based in Europe, be willing and able to travel
    We offer: a competitive package, performance bonus, fast career progression, and international career opportunities.
  • 20k-35k, 3-5 yrs experience / Bachelor's
    IT services | consulting, IoT / Unfunded / 15-50 employees
    Job title: Data Scientist. City: Shanghai.
    Benefits
    1. Local health insurance plan or insurance allowance
    2. Flexible travel arrangements
    3. Drinks, snacks, and fruit
    4. Seasonal drinks and birthday celebrations
    5. Team-building activities and outings
    6. Assistance with Shanghai hukou and Shanghai work-residence permits
    7. A versatile, well-designed office right next to the metro
    Job summary: We are looking for a highly motivated and skilled data scientist to join our team. The successful candidate will analyze complex datasets, develop predictive models, store data, build industry-specific databases, deliver client projects, and drive business insight and decision-making. As a data scientist, you will work closely with cross-functional teams to identify business problems, design and implement data-driven solutions, and communicate findings to key stakeholders.
    Key responsibilities:
    1. Analyze large, complex datasets to identify trends, patterns, and insights
    2. Develop predictive models using statistical and machine-learning techniques
    3. Work with cross-functional teams to identify business problems and design data-driven solutions
    4. Crawl data using web-scraping tools and techniques
    5. Clean and preprocess scraped data to ensure accuracy and consistency
    6. Store data and build databases for different industries
    7. Deliver client projects on schedule
    8. Communicate findings and recommendations to stakeholders clearly and concisely
    9. Continuously monitor and evaluate model performance and improve it as needed
    10. Keep up with the latest trends and advances in data science and machine learning
    Requirements:
    1. Bachelor's or master's degree in computer science, statistics, mathematics, or a related field
    2. 2+ years of experience as a data scientist or in a similar role
    3. Proficient in Python and SQL
    4. Deep knowledge of statistical and machine-learning techniques
    5. Experience with web-scraping tools such as BeautifulSoup or Scrapy
    6. Experience with data-visualization tools such as Tableau or PowerBI
    7. Excellent analytical, problem-solving, and communication skills
    8. Able to work both independently and with a team
    9. Experience with big-data technologies such as Hadoop, Spark, or AWS is a plus
    10. Fluent spoken English
    If you are a passionate, driven data scientist with strong analytical thinking and experience in market-data scraping, data storage, and client project delivery, eager to make a real impact in data science, we encourage you to apply for this exciting opportunity.
    What we offer
    1. Collaboration with international teams around the world
    2. Exposure to the latest web technologies and cutting-edge AR/VR know-how
    3. The chance to learn more e-commerce and mobile technologies
    4. Hands-on UI and UX experience
    5. Work with project-management and tooling systems such as JIRA, Confluence, Hudson, and Selenium
    About TMO Group
    TMO Group is an international digital-commerce solutions provider with offices in Amsterdam, Shanghai, Chengdu, and Hong Kong. E-commerce - Data - AI. We provide integrated solutions across your e-commerce value chain, spanning consulting, design, development, intelligent marketing, and cloud-enabled managed services. We focus on B2C and D2C businesses in the health and beauty industry, and on B2B digital transformation across all industries.
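The scrape-then-clean workflow in this posting (responsibilities 4-5) can be sketched with the standard library alone; the posting itself names BeautifulSoup and Scrapy for real work, and the HTML snippet and tag structure below are invented for illustration:

```python
from html.parser import HTMLParser

# Tiny scraper: extract product names from <li class="product"> tags,
# then clean them (strip whitespace, drop empties, deduplicate).
# The HTML below is invented; a real project would fetch pages and
# likely use BeautifulSoup or Scrapy instead of the stdlib parser.
class ProductParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_product = False
        self.products = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        if tag == "li" and ("class", "product") in attrs:
            self.in_product = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_product = False

    def handle_data(self, data):
        if self.in_product:
            self.products.append(data)

def clean(names):
    """Strip whitespace, drop empties, keep first occurrence of each name."""
    seen, out = set(), []
    for n in (n.strip() for n in names):
        if n and n not in seen:
            seen.add(n)
            out.append(n)
    return out

html = ('<ul><li class="product"> Serum A </li><li>ad</li>'
        '<li class="product">Serum A</li><li class="product">Cream B</li></ul>')
parser = ProductParser()
parser.feed(html)
products = clean(parser.products)
```

The cleaning pass is where the posting's "accuracy and consistency" requirement lives: stray whitespace and duplicate records are removed before anything reaches the database.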
  • 25k-40k, 5-10 yrs experience / Bachelor's
    Consumer lifestyle / No funding needed / 2000+ employees
    Responsibilities
    1. Assess all visualization and reporting requirements and develop a long-term strategy for the various dashboard & reporting solutions.
    2. Communicate effectively with business partners to understand their needs, and design reports accordingly.
    3. Collect and understand the business logic behind all reports and translate it into data-model design requirements.
    4. Manage projects, prepare updates, and implement all phases of a project.
    5. Turn data into insights with actionable execution plans and influence key stakeholders to implement the solutions.
    6. Provide training to business teams on BI tool usage and dashboard creation.
    Requirements
    1. Proficient in using SQL, Python, and R for data manipulation and analysis.
    2. Experienced in data visualization and dashboard development with tools like Tableau.
    3. Excellent presentation, project management, and people management skills.
    4. Bachelor's degree or above in statistics, business analytics, mathematics, computer science, or a related field, with training in data and analytics.
    5. 5+ years of experience managing analytics & BI projects, with successful project experience using data to drive business value (senior-analyst level).
    6. Experience in data governance, data quality management, data processing, and insights.
    7. Strong in data-visualization tools (such as Tableau, PowerBI, etc.) as well as BI portal products.
    8. Excellent project planning and organization; able to use data to identify and solve problems.
    9. Experience in retail, CRM, supply chain, or production is a plus.
  • 25k-35k · 14-month salary, 5-10 yrs experience / Master's
    IT services | consulting / Listed company / 2000+ employees
    Responsibilities:
    • Develop AI and machine learning solutions to optimise business performance across different areas of the organisation
    • Build exploratory analyses to identify the root causes of a particular business problem
    • Explore and analyze all the data to understand customer behavior and provide a better customer experience
    • Use data analytics to help in-country Marketing, Distribution, Operations, Compliance, and HR teams increase revenue, lower costs, and improve operational efficiency
    • Identify actionable insights, suggest recommendations, and influence the direction of the business across cross-functional groups
    • Build MIS to track the performance of implemented models, validate their effectiveness, and show the financial benefits to the organization
    • Support the data analytics team with data extraction and manipulation
    • Help the decision analytics team create documentation for all analytical solutions developed
    • Build the data infrastructure necessary for developing predictive models and more sophisticated data analysis
    • Develop data analyses for different business units
    Requirements:
    • A degree in a numerate discipline, e.g. actuarial science, mathematics, statistics, engineering, or computer science with a strong computer programming component
    • Demonstrable understanding of data quality risks and the ability to carry out the quality checks needed to validate results
    • Knowledge of a variety of machine learning techniques (clustering, decision trees, random forests, artificial neural networks, etc.)
    • Proven hands-on experience with at least one advanced data analysis platform (e.g. Python, R)
    • Sound knowledge of programming in SQL, or other programming experience
    • Articulate, with excellent oral and written communication skills; a fast learner with a willing attitude; intellectually rigorous, with strong analytical skills and a passion for data
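Of the machine learning techniques listed in the requirements, clustering is the easiest to show end to end; a pure-Python sketch of 1-D k-means (the points and initial centers are invented, and a real project would use a library implementation):

```python
# Pure-Python sketch of k-means clustering on 1-D data.
# Points and initial centers are invented for illustration; in
# practice you would reach for a library such as scikit-learn.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        # Assignment step: each point joins its nearest center.
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers = kmeans_1d(points, centers=[0.0, 10.0])
```

The two alternating steps (assign, then re-average) are the whole algorithm; higher-dimensional k-means only swaps the absolute difference for a Euclidean distance.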
  • Software services | consulting, IT services | consulting / No funding needed / 50-150 employees
    Foreign bank; weekends off, no overtime. There are in fact five openings:
    - 4-9 yrs: data analysis; deep learning, NLP, transformer models
    - 3-6 yrs: data development; Hadoop, Spark, Kafka, SQL, Flink
    - 3-6 yrs: data development; Python, SQL, pandas, databases
    - 2-6 yrs: Machine Learning Engineer; Python, Docker, CI/CD; machine learning and/or ML model-monitoring experience a plus
    - 6-9 yrs: experience in HDFS, MapReduce, Hive, Impala, Sqoop, Linux/Unix technologies; Spark is an added advantage
    Job Duties & Responsibilities:
    • Support finance regulatory reporting projects & applications as an ETL developer | big-data applications following the Agile software development life cycle | level 2/3 support of the data warehouse.
  • 25k-35k · 13-month salary, 5-10 yrs experience / Bachelor's
    Consumer lifestyle / No funding needed / 2000+ employees
    1. Who you are
    As a person you are motivated to continuously develop and enhance your programming skills and your knowledge of machine learning and AI applications. The IKEA business, our values, and how they apply to the data management process are your passion. Furthermore, you are energized by working both independently and interdependently with the architecture network and across functions. You appreciate the mix of strategic thinking and turning architecture trends into practice. Last but not least, you share and live the IKEA culture and values.
    In this role you have proven advanced training in (computer) engineering, computer science, econometrics, mathematics, or equivalent. You have experience and knowledge of working with large datasets and distributed computing architectures. You have knowledge of coding (R and/or Python) and of developing artificial intelligence applications. You have experience in statistical analysis and statistical software, and you are confident in data processing and analysis. You have demonstrable experience of working in an Agile or DevOps set-up.
    You have knowledge in the following areas:
    • Data-set processes for data modelling, mining, and production, as well as visualization tools, e.g. Tableau
    • At least one building block of artificial intelligence
    • Machine learning development languages (R and/or Python) and other developments in the fast-moving data technology landscape, e.g. Hive or Hadoop
    • DevOps and agile development practices
    • Exposure to the development of industrialised analytical software products
    • IKEA's corporate identity, core values, and vision of creating a better everyday life for the many people
    We believe that you are able to work with large amounts of raw data and feel comfortable working with different programming languages. You bring high intellectual curiosity, with the ability to develop new knowledge and skills and to use new concepts, methods, digital systems, and processes to improve performance; the ability to provide input to user-story prioritisation as appropriate, based on new ideas, approaches, and strategies; and the ability to understand the complexity of the IKEA business and the role of data and information as an integrated part of it.
    2. Your responsibilities
    As Data Scientist you will perform predictive and prescriptive modelling and help develop and deploy artificial intelligence and machine learning algorithms to reinvent and grow the IKEA business in a fast-changing digital world. You will:
    • Explore and examine data for use in predictive and prescriptive modelling, deliver insights to business stakeholders, and support them in making better decisions
    • Influence product and strategy decision-making by helping to visualize complex data, and enable change through knowledge of the business drivers that make the product successful
    • Support senior colleagues with modelling projects, identifying requirements and building tools that are statistically grounded, explainable, and able to adapt to changing attributes
    • Support the development and deployment of the different analytical building blocks that serve the requirements of the capability area and customer needs
    • Use root-cause research to identify process breakdowns, and provide data through a variety of skill sets to find solutions to those breakdowns
    • Work closely with other data scientists and across functions to help produce all required design specifications, and ensure that data solutions work together and fulfil business needs
    • Work across initiatives within INGKA Group, steering towards data-driven solutions
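The predictive modelling described above reduces, in its simplest instance, to a one-variable least-squares fit; a pure-Python sketch with invented data:

```python
# Simplest predictive model: ordinary least squares for y = a*x + b,
# fitted in pure Python. The (x, y) data are invented for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]           # exactly y = 2x + 1
a, b = fit_line(xs, ys)
pred = a * 5 + b            # forecast for x = 5
```

Everything heavier in the posting, from prescriptive models to neural networks, follows the same shape: fit parameters to historical data, then evaluate the fitted function on unseen inputs.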