Application closing date: Tuesday, 05 July 2022 • 11:59pm Canberra time

Estimated start date: Monday, 18 July 2022

Location of work: ACT, NSW, NT, QLD, SA, TAS, VIC, Offsite

Length of contract: until 30 June 2023

Contract extensions: 2 x 12 months

Security clearance: Must have Baseline

Rates: $120 - $170 per hour (inc. super)

The National Skills Commission (NSC) provides expert advice and national leadership on Australia’s labour market and current, emerging and future workforce skills needs. We also have an important role in simplifying and strengthening Australia’s Vocational Education and Training (VET) system. 

The NSC is seeking a Senior Data Engineer to drive the design and implementation of solutions for key data-centric projects and ongoing business conducted on our modern Data and Analytics Platform. The NSC's work relies heavily on data, and data engineering is a critical part of it. Our platform is built with top-tier tools, including Azure Data Lake, Azure Data Factory, Databricks, Apache Spark, Aristotle, RStudio, Power BI and SAS.

The Senior Data Engineer role suits candidates with strong technical experience who are seeking to transition towards a management/consulting career track. It requires engaging with Senior Executives to advise on strategy, business planning and the approach to data engineering across the NSC, while remaining closely engaged with the technical work. The ability to communicate well with business areas is essential to this role.

Working closely with NSC Subject Matter Experts, Data Scientists and Data Analysts, you will be required to: 

  • Contribute your expertise to the development of data engineering solutions for the ingestion and ETL of data assets critical to NSC projects and ongoing business
  • Establish a streamlined process for joining and manipulating data to deliver business-ready data for use by teams across the NSC
  • Develop and lead data engineering solutions to join and transform data assets critical to delivery of NSC projects
  • Review and refine existing engineering solutions on the NSC’s data and analytics platform
  • Provide advice and guidance on risks and issues to non-technical managers and stakeholders
  • Provide ad hoc advice on data governance issues to data governance team members
  • Assist with data modelling of existing and business intelligence data assets to inform data architecture
  • Develop, implement and review data-related infrastructure, processes, and procedures
  • Work closely with our application, infrastructure and networking partners on the development of infrastructure to support data and analytics projects, and on integration between cloud and on-premises hosted services
  • Work under limited direction and be accountable for undertaking planning, analysis, design, development, and delivery activities within tight timeframes
  • Lead a small, technically focused team of Data Engineers (note: this team has not yet been established; additional resources are likely to be engaged as the NSC's uptake of the Data and Analytics Platform increases)

Essential Criteria

  1. Data Engineering
     • Qualifications in Computer Science, IT or a related discipline
     • Data engineering experience
     • Demonstrated experience in developing, configuring, validating and managing Microsoft cloud platforms, including Azure Data Factory, SQL Server and Databricks
     • Understanding of machine learning concepts
     • Demonstrated experience in DevOps and/or DataOps
     • Documented, hands-on ETL development using tools such as SQL Server Integration Services or Azure Data Factory
  2. Data modelling and database management
     • Data modelling experience
     • Strong knowledge of database structure systems, data analytics, data mining and data modelling techniques
     • Documented experience in data analysis and management, with excellent analytical problem-solving abilities
     • Demonstrated ability to combine multiple large and complex datasets to deliver high-quality datasets
  3. Coding and systems administration
     • Demonstrated experience in coding languages such as SQL, Python, R, C#/C++ and PowerShell
     • Systems administration experience
     • Database and data lake management experience across design, development, delivery, validation and ongoing maintenance
  4. Reporting and visualisation
     • Experience using and developing engineering solutions for data visualisation tools such as RStudio, Power BI, SAS, Qlik and Tableau
     • An applied understanding of UI design concepts
  5. Communication and collaboration
     • Well-developed team collaboration and communication skills
     • Demonstrated ability to work under limited direction and take accountability for completion of work
     • Demonstrated ability to provide advice and develop solutions within multi-disciplinary work environments
     • Demonstrated ability to communicate complex technical concepts to non-technical staff

Desirable Criteria

  • Experience with the deployment and maintenance of machine learning models
  • Experience developing datasets for API delivery in a variety of scenarios
  • Experience with machine learning technologies
  • Apache Spark (Scala or Python) experience
  • Spark SQL coding experience
  • Automation experience
  • End-to-end machine learning pipeline development
  • Proven experience working on Data Science projects