Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY).
To find out more, visit www.lenovo.com and read about the latest news via our StoryHub.

Description and Requirements
Job Responsibilities
This role will involve:
Maintaining and improving the existing system to meet complex and rapidly changing business scenarios.
Creating workflow tools with a frontend user interface and backend database structure for business areas such as planning and procurement.
Building big data platforms on Hadoop or Spark to meet specific business requirements.
Reviewing business requirement documents and developing automation solutions using Java, SQL, and other similar programming languages.
Job Requirements:
Master's degree or above in Computer Science, Computer or Software Engineering, or a related field.
At least 3 years of experience in Java software development; more experience is a plus.
Good communication skills and a good command of English.
Strong knowledge of and skill with MySQL, SQL Server, and PostgreSQL.
Strong knowledge of and skill with major open-source frameworks such as Spring Boot and Spring Cloud.
Knowledge of and hands-on experience with current data technologies and frameworks such as Hadoop, MapReduce, Hive, Spark, Flink, and Kafka, as well as other big data components; an understanding of their characteristics and usage scenarios, and the ability to select and develop the appropriate components based on project requirements.
Familiarity with project lifecycles and software processes (functional and non-functional requirement analysis, system design/architecture, implementation, configuration/build management, testing/integration, user acceptance testing, roll-out, maintenance).
Familiarity with Python and Linux shell/Bash scripting.
AI-related skills or experience are preferred.
Additional Locations: China - Tianjin - 天津 (Tianjin)