The Oasis Loss Modelling Framework provides an open source platform for developing, deploying and executing catastrophe models. It uses a full simulation engine and places no restrictions on the modelling approach.
Models are packaged in a standard format, and the components can come from any source, such as model vendors or academic and research groups. The platform provides:
- A platform for running catastrophe models, including an exposure/results database, a web-based user interface and an API for integration with other systems (Oasis Loss Modelling Framework)
- Core components for executing catastrophe models at scale, and standard data formats for hazard and vulnerability (Oasis ktools)
- A toolkit for developing, testing and deploying catastrophe models (Oasis Model Development Toolkit)
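To make the "full simulation engine" idea concrete, the sketch below shows a heavily simplified ground-up loss simulation: for each event, sample a hazard intensity from the model's footprint, look up a mean damage ratio from a vulnerability curve, and apply it to the exposed value. The data structures and names here are illustrative assumptions only; real Oasis models use binned footprint and vulnerability files consumed by the ktools C++ components.

```python
import random

# Footprint (illustrative): event -> hazard intensities with probabilities.
footprint = {
    1: [(0.3, "low"), (0.7, "high")],
}

# Vulnerability (illustrative): hazard intensity -> mean damage ratio.
vulnerability = {"low": 0.05, "high": 0.40}


def sample_ground_up_loss(event_id, exposed_value, n_samples, rng):
    """Monte Carlo samples of ground-up loss for one event and location."""
    losses = []
    for _ in range(n_samples):
        r = rng.random()
        cumulative = 0.0
        # Sample an intensity from the footprint's discrete distribution.
        for prob, intensity in footprint[event_id]:
            cumulative += prob
            if r <= cumulative:
                losses.append(exposed_value * vulnerability[intensity])
                break
    return losses


rng = random.Random(42)
samples = sample_ground_up_loss(1, 100_000, n_samples=1000, rng=rng)
mean_loss = sum(samples) / len(samples)
```

In the real platform this sampling loop runs in the optimized C++ ktools components and streams results between processes, but the principle is the same: losses are built up by simulation rather than by a fixed analytical formula.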
There are three main ideas behind the Oasis platform architecture:
- We develop the core modelling and analytics components in C++. This allows low-level optimization of processor and memory usage, and the language is proven for large-scale, complex numerical applications.
- We use standard open source frameworks wherever possible, in particular for job management (Celery, RabbitMQ), web services (Flask) and other enterprise features.
- We use standard open source DevOps tools (GitHub, Docker) for deploying all components and models.
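As a rough illustration of how these pieces fit together in a Docker-based deployment, a compose file might wire a message broker to the API server and model workers. The service and image names below are hypothetical; consult the Oasis deployment repositories for the actual configuration.

```yaml
# Illustrative sketch only - service and image names are assumptions.
version: "3"
services:
  rabbitmq:              # message broker for Celery job management
    image: rabbitmq:3-management
  server:                # web services / API tier
    image: my-org/oasis-api-server    # hypothetical image name
    ports:
      - "8000:8000"
    depends_on:
      - rabbitmq
  worker:                # model execution worker, scaled horizontally
    image: my-org/oasis-model-worker  # hypothetical image name
    depends_on:
      - rabbitmq
```

Separating the broker, API tier and workers in this way is what makes it straightforward to add worker capacity during periods of peak usage.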
All the code and components are available on GitHub under the permissive BSD 3-Clause license, and can be freely used and tailored as necessary for specific use cases.
A major challenge addressed by the Oasis platform architecture is supporting a wide range of deployment options.
This flexibility allows use by re/insurance companies, as well as by modelling companies and academics who wish to deliver their models and consulting services to risk managers both within and, increasingly, beyond the re/insurance industry.
There are four main options for deploying Oasis:
- Deploy the entire Oasis platform. This can be done as an in-house or hosted solution.
- Deploy the API and model execution framework externally, and integrate with internal exposure management systems.
- Deploy the model execution framework to the cloud, which could be Azure, AWS, or a private cloud. This provides scalability and the ability to add capacity during periods of peak usage.
- Deploy specific modelling components in other applications; for example, the financial engine could be used for exposure reporting.
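Whichever option is chosen, client systems interact with the platform through its API. The sketch below shows roughly what triggering an analysis over HTTP might look like from Python; the endpoint path and payload fields are assumptions for illustration, not the actual Oasis API schema.

```python
import json
import urllib.request

# Hypothetical sketch of calling a hosted Oasis API to start an analysis.
# The "/analyses/" path and the payload fields are assumed, not the real
# API schema.


def build_run_request(base_url, portfolio_id, model_id):
    """Build (but do not send) a POST request to trigger an analysis."""
    payload = json.dumps({
        "portfolio": portfolio_id,   # assumed field name
        "model": model_id,           # assumed field name
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/analyses/",  # assumed endpoint path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_run_request("http://localhost:8000", portfolio_id=1, model_id=7)
# urllib.request.urlopen(req) would submit it to a running server.
```

Because the interface is plain HTTP, the same call works whether the platform is deployed in-house, hosted, or running in a public cloud.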
All support requests should be sent to: firstname.lastname@example.org.