Data Manager/Modeller/Architect - Remote working
As a member of the Data Architecture Team you will help implement data projects against the central initiative. You will work closely with other technology, project and delivery leads to deliver on target architectures, run proofs of concept and drive change across the business.
- Support development of the data architecture through requirements gathering and use cases, drawing on knowledge of the market, external partners and research.
- Support and enable the build and maintenance of ETL processes, including data quality, documentation and testing.
- Support the building and maintenance of data pipelines from source to data lake and on to the data warehouse & presentation layer.
- Ensure solutions and data management processes are consistent with Enterprise data models and strategy.
- Facilitate the execution of the roadmap and vision for information delivery and management, including the enterprise data warehouse, data lake, BI & analytics, and data management.
- Ensure the ease of usability, flexibility & scalability of data lake and warehouse.
- Provide input into data management practices including data classification, data retention, data usage etc.
- Support definition of data security procedures on data access, handling, usage and storage.
- Support delivery and migration of new data software, applications, pipelines and services.
- Support creation and maintenance of data models & data relationship diagrams.
- Experience of managing data sources, data lineage, data quality, completeness, redundancy vs. replication, data dictionaries, data models, metadata and master data.
- Experience of gathering and analysing system requirements focused on the data elements across capture, processing, storage and distribution.
- In-depth understanding of database structure and principles across both relational and NoSQL database technologies, e.g. PostgreSQL, MongoDB.
- Experience of AWS or Azure data platforms (e.g. S3, Redshift, Athena, Glue, RDS, DynamoDB).
- Knowledge of Python/Spark/PySpark
- Experience in POCs on new and emerging technologies
- Experience in data migration projects
- Familiarity with data visualisation tools, including Power BI.
- Data modelling experience covering normalised, de-normalised (star schema) and NoSQL models.
- Experience developing Extract, Transform, and Load (ETL) specifications and mappings to support data conversion and integration
- Experience with metadata management approaches and tools
If the role sounds of interest and you feel you have the experience required, please send me your CV and I will call you to discuss the role in more detail.