Plano, TX, USA
Software Engineer

DESCRIPTION:

Duties: Design, develop, and implement software solutions. Solve business problems through innovation and engineering practices. Serve as subject matter expert (SME) for the core cash fraud business domain and drive initiatives’ planning, design, and prioritization with product and user teams. Participate in all aspects of the Software Development Lifecycle (SDLC), including analyzing requirements, incorporating architectural standards into application design specifications, documenting application specifications, translating technical requirements into programmed application modules, and developing or enhancing software application modules. Represent the data engineering SDLC and guide the team on SDLC projects. Identify and troubleshoot application code-related issues. Take an active role in code reviews to ensure solutions align with pre-defined architectural specifications. Assist with design reviews by recommending ways to incorporate requirements into designs and information or data flows. Participate in project planning sessions with project managers, business analysts, and team members to analyze business requirements and outline proposed solutions. Mentor and guide new employees through technical and task planning.


QUALIFICATIONS:

Minimum education and experience required: Bachelor’s degree in Computer Engineering, Computer Science, Data Analytics, Data Engineering, or related field of study plus 7 years of experience in the job offered or as Software Engineer, Sr Software Engineer, Software Developer, Sr Software Developer, Sr Java Developer, Software Systems Engineer, Sr Member of Technical Staff, or related occupation. The employer will alternatively accept a Master’s degree in Computer Engineering, Computer Science, Data Analytics, Data Engineering, or related field of study plus 5 years of experience in the job offered or as Software Engineer, Sr Software Engineer, Software Developer, Sr Software Developer, Sr Java Developer, Software Systems Engineer, Sr Member of Technical Staff, or related occupation.

Skills Required: Requires experience in the following:

- Developing microservices deployed as REST APIs
- Java and J2EE concepts such as Java concurrency, Hibernate, caching, graceful failure handling, the JVM, memory management, and dependency injection
- Designing and implementing container-based APIs using Docker, including managing Docker images and implementing Docker Swarm for high availability, automatic disaster recovery, and zero-downtime updates
- Kubernetes Horizontal Pod Autoscaling on AWS
- Configuring and managing Kubernetes secrets and ReplicaSets
- Ingress, egress, or multi-tenancy
- Configuring the control plane, ephemeral storage and volumes, AWS ECS on Fargate, and network and application load balancing
- Analyzing log groups, creating log anomaly detection, and creating alarms using AWS CloudWatch Logs
- Data structures, algorithms, and design patterns
- Integrating microservices with CyberArk and Kerberos authentication
- Configuring composite pipelines, executing CI/CD Spinnaker pipelines, and automating rollback pipelines
- Utilizing Spring, Spring Boot, and Maven to manage application dependencies
- Data modeling, normalization, and performance tuning of Oracle, Cassandra, and AWS DynamoDB
- De-duplication, replay, multi-consumer-group onboarding, and tuning replication factors and partitions in Kafka
- Integrating Datadog Application Performance Monitoring for CPU, memory, and infrastructure alerts
- Utilizing Dynatrace to configure, create, and analyze distributed traces
- AWS infrastructure deployments with Terraform using the plan, apply, and destroy commands and remote state storage
- Developing performance test automation frameworks using JMeter or BlazeMeter for high-TPS applications
- Creating custom dashboards and setting up alerts, charts, and email reports using Splunk
- Building front-end applications using ReactJS, JavaScript, Angular, HTML, CSS, Ajax, and jQuery
- Developing unit tests with JUnit, Mockito, Sonar, SonarLint, and PowerMockito
- Application resiliency and security methodologies such as ADFS, OAuth 2.0, or Ping Identity
- ETL processes, including data cleansing, aggregation, enrichment, and data usability
- Building distributed processing applications using Apache Spark and Hadoop to process petabyte-scale data volumes
- Data encodings such as Parquet, Avro, or JSON
- Utilizing YARN for resource management
- Building Spark user-defined functions (UDFs) and vectorized UDFs

Job Location: 8181 Communications Pkwy, Plano, TX 75024. Telecommuting permitted up to 40% of the week.
