Hyderabad, Telangana, India
Security Engineer II - AWS Data Engineer

You're at the forefront of delivering secure software solutions. Join us as a valued member of a top-performing team.

As a Security Engineer II at JPMorgan Chase within the Cybersecurity & Tech Controls team, you are part of an agile team that works to deliver software solutions that satisfy pre-defined functional and user requirements with the added dimension of preventing misuse, circumvention, and malicious behavior. As an emerging member of the security engineering team, you execute basic software solutions through the design, development, and troubleshooting of multiple components within a technical area, while gaining skills and experience to grow within your role.

Job responsibilities

- Executes standard security solutions in accordance with existing playbooks to satisfy security requirements for internal clients (e.g., product, platform, and application owners)
- Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
- Applies specialized tools (e.g., vulnerability scanner) to analyze and correlate incident data to identify, interpret, and summarize the probability and impact of threats when determining specific vulnerabilities
- Supports delivery of continuity-related awareness, training, educational activities, and exercises
- Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

- Formal training and understanding in Security Engineering concepts and 2+ years of applied experience
- Demonstrated expertise in agile methodologies, including CI/CD, application resiliency, and security practices
- Hands-on experience building scalable data pipelines for large data sets with Spark (SparkSQL) and Airflow or similar scheduling/orchestration tools
- Proficient in big data technologies such as Hadoop, Hive, HBase, Spark, and EMR
- Skilled in working with MPP frameworks like Presto, Trino, and Impala
- Experience with AWS big data services (Glue, EMR, Lake Formation, Redshift) or equivalent Apache projects (Spark, Flink, Hive, Kafka)

Preferred qualifications, capabilities, and skills

- Familiarity with building stream-processing systems, using solutions such as Storm or Spark Streaming
- Understanding of Trino, DBT, ETL, SQL scripts, and Python programming