Although we are regarded as all-round data migration experts, we have the following specialisations, both by industry and by content:
Insurance companies and pension funds have to innovate and reduce costs in order to survive. Moreover, legislators require these companies to report periodically to supervising authorities (e.g. DNB and AFM). However, legacy systems obstruct this mandatory reporting, as well as cost reduction and innovation. Data migration to a new system is the only logical way to remain competitive.
Metaverses is a specialist in data migration for the financial industry. By combining industry knowledge of pension, life, and non-life insurance with extensive data migration experience from the publishing industry, an innovative company has emerged with a unique approach to data migration in the financial sector.
We make data migration easy by combining business knowledge with the right way of working, supported by the best tooling. By rethinking the whole approach, we have developed an efficient, low-risk, auditable way to safely migrate your data to modern systems. We can ease all your data migration concerns and reduce costs and lead time by up to 50% compared to common practice.
- Migrating closed cases (static information) and open cases. Metaverses transforms the data into the right standard format for the E-depot and can perform various checks (ingest role)
- Migrating once, multiple times, or on a regular basis (even by creating more permanent interfaces between existing (EDMS) systems and the E-depot)
- Both structured and unstructured data can be used
Migration to Cloud
Moving sensitive data is one of the biggest problems for companies moving to the cloud. Research shows that many organisations' main concern is the time needed to migrate data: nearly half of them see this as an obstacle. Companies have good reason to worry, as delays in data migration typically stem from the complexity of the transition.
The Metaverses approach results in a quick understanding of data (quality) issues, including direct result checking during iterative mapping sessions. Issues can be addressed immediately (and often solved), so the number of unexpected issues is dramatically reduced.
Improving data quality
To quickly gain insight into data quality, Metaverses uses innovative data profiling tooling. Based on the issues found, we can identify which data needs to be improved manually and which issues can be resolved fully automatically.
For automated data quality improvement or data enrichment, Metaverses can also use external sources (well known within an industry). Via API connections or plain data dumps we use this data directly in our mappings, so that the conversion also results in better data quality. Within our mappings we can, of course, also define automated transformations to improve data quality.
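The combination of profiling and automated transformation described above can be sketched in plain Python. The field names and the postcode rule are illustrative assumptions, not Metaverses' actual tooling:

```python
# A minimal data-profiling and automated-cleanup sketch (hypothetical data).
from collections import Counter
import re

def profile(records, field):
    """Profile one field: fill rate, distinct values, most common values."""
    values = [r.get(field) for r in records]
    filled = [v for v in values if v not in (None, "")]
    return {
        "fill_rate": len(filled) / len(values) if values else 0.0,
        "distinct": len(set(filled)),
        "top": Counter(filled).most_common(3),
    }

def normalise_postcode(value):
    """Automated transformation: normalise Dutch postcodes to '1234 AB' form."""
    m = re.fullmatch(r"\s*(\d{4})\s*([A-Za-z]{2})\s*", value or "")
    return f"{m.group(1)} {m.group(2).upper()}" if m else value

records = [
    {"name": "Jansen", "postcode": "1234ab"},
    {"name": "De Vries", "postcode": "5678 CD"},
    {"name": "Bakker", "postcode": ""},
]
print(profile(records, "postcode"))  # fill rate flags the empty postcode
cleaned = [{**r, "postcode": normalise_postcode(r["postcode"])} for r in records]
```

The profiling step reveals which fields need attention; records the rule cannot fix automatically (like the empty postcode) are left for manual improvement.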
Using business intelligence (BI) tooling, we can provide insights based on your data to support your decision-making process.
Structuring unstructured data
A lot of the knowledge within your organisation is anchored in documents (unstructured data). The number of documents grows exponentially, making it ever harder to find the document you need. Metaverses uses tooling to extract attributes such as date, version, author, or subject directly from documents, generating metadata that can be stored, classified and searched.
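A simplified, rule-based version of such metadata extraction might look as follows. The patterns and the sample document are illustrative assumptions only:

```python
# Rule-based metadata extraction sketch: pull date, version, author, and
# subject out of raw document text so they can be stored and searched.
import re

PATTERNS = {
    "date": re.compile(r"\b(\d{1,2}-\d{1,2}-\d{4})\b"),
    "version": re.compile(r"\bversion[:\s]+([0-9.]+)", re.IGNORECASE),
    "author": re.compile(r"\bauthor[:\s]+([A-Z][\w. ]+)", re.IGNORECASE),
    "subject": re.compile(r"\bsubject[:\s]+(.+)", re.IGNORECASE),
}

def extract_metadata(text):
    """Return the first match per field as a searchable metadata record."""
    meta = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(text)
        if m:
            meta[field] = m.group(1).strip()
    return meta

doc = "Subject: Pension policy renewal\nAuthor: J. Jansen\nVersion: 2.1\nDate: 12-03-2021"
print(extract_metadata(doc))
```

In practice such rules are a starting point; machine-learning techniques (as mentioned below) can handle documents where these fields are not stated so explicitly.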
We are also exploring how Machine Learning and Artificial Intelligence can improve and further automate this process. The results of these techniques will, of course, be better and available faster when data quality is high.
Metaverses can easily reduce the identifiability of individuals in the original dataset to a level acceptable within your organisation's risk portfolio. We offer two separate services to help you with data anonymisation: Metaverses advises which methods suit your dataset and how your data could be anonymised, and we realise (or help you realise) the anonymisation of your data.
Metaverses uses various techniques to fit your needs, such as:
- Character masking (hiding either a fixed or variable amount of characters)
- Pseudonymisation (replacing identifying data with made-up values)
- Swapping (rearranging the data until it no longer corresponds to the original records)
Data anonymisation can be used for data repurposing and integration testing, among other purposes.
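Minimal illustrations of the three techniques above, on made-up sample data (real anonymisation requires a proper risk assessment of the dataset):

```python
# Toy implementations of character masking, pseudonymisation, and swapping.
import random

def mask(value, keep=2, char="*"):
    """Character masking: hide all but the last `keep` characters."""
    return char * max(len(value) - keep, 0) + value[-keep:]

def pseudonymise(value, table):
    """Pseudonymisation: replace each identifier with a stable made-up value."""
    if value not in table:
        table[value] = f"PERSON-{len(table) + 1:04d}"
    return table[value]

def swap(records, field, rng=random.Random(42)):
    """Swapping: shuffle one field across records, breaking the original link."""
    values = [r[field] for r in records]
    rng.shuffle(values)
    return [{**r, field: v} for r, v in zip(records, values)]

table = {}
records = [{"name": "Jansen", "iban": "NL91ABNA0417164300"},
           {"name": "De Vries", "iban": "NL20INGB0001234567"}]
anonymised = [{"name": pseudonymise(r["name"], table),
               "iban": mask(r["iban"], keep=4)} for r in records]
```

Each technique trades off utility against identifiability differently: masking keeps format, pseudonymisation keeps linkability across records, and swapping keeps the value distribution intact.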