Factors to consider when migrating
You'll want to think about these factors when analyzing and planning for your migration:
- Schemas
- Data
- Infrastructure
- Applications
- Operations
Schema migration considerations
The schema defines the structure of the database. Schemas include the definition for tables and other structures, like sequences, indexes, constraints, and views. Although the syntax that Postgres provides for creating these objects is often similar or identical to that of Oracle, there are differences that you might need to address when migrating them. There are also differences in how Oracle and Postgres organize and store schema objects that will be important to understand.
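As one illustration of an organizational difference, in Oracle a schema is tied directly to a database user, while in Postgres schemas are namespaces inside a database that are managed separately from roles. The following is a minimal sketch using a made-up appuser name and password:

```sql
-- Oracle: creating a user implicitly creates a schema of the same name.
CREATE USER appuser IDENTIFIED BY secret;

-- Postgres: users (roles) and schemas are separate objects, so an
-- equivalent setup usually creates both explicitly.
CREATE ROLE appuser LOGIN PASSWORD 'secret';
CREATE SCHEMA appuser AUTHORIZATION appuser;
```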
Data types are used in the tables and in the other objects that are used by the database. For a migration, you need to think about how data types in Oracle map to those in Postgres.
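As a rough sketch of the kind of type mapping involved, the example below shows a hypothetical Oracle table and one common way to express it in community PostgreSQL. The table and column names are made up, and EDB Postgres Advanced Server also provides Oracle-compatible data types such as VARCHAR2 and NUMBER, which can reduce how much of this conversion you need to do.

```sql
-- Oracle table definition (illustrative):
CREATE TABLE orders (
    order_id    NUMBER(10)    PRIMARY KEY,
    customer    VARCHAR2(100) NOT NULL,
    total       NUMBER(12,2),
    created_at  DATE          DEFAULT SYSDATE
);

-- One common PostgreSQL equivalent after mapping the data types:
CREATE TABLE orders (
    order_id    NUMERIC(10)   PRIMARY KEY,  -- or BIGINT if the values fit
    customer    VARCHAR(100)  NOT NULL,
    total       NUMERIC(12,2),
    created_at  TIMESTAMP     DEFAULT CURRENT_TIMESTAMP  -- Oracle DATE carries a time component
);
```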
A database schema also contains code objects, including packages, procedures, functions, and triggers, that are written directly in the database. In Oracle, code objects are written in Oracle's PL/SQL language. PostgreSQL has its own built-in programming language that isn't compatible with Oracle's. However, EDB Postgres Advanced Server does provide a built-in implementation of PL/SQL, known as SPL. Although EDB's SPL covers a large percentage of the most commonly used Oracle PL/SQL constructs, it doesn't implement the full set of Oracle constructs. As a result, you might need to convert some Oracle code objects to work with EDB Postgres Advanced Server.
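As a hedged illustration, here's a simple lookup function written in Oracle PL/SQL, which EDB Postgres Advanced Server's SPL is generally able to accept for straightforward cases like this, alongside a PL/pgSQL rewrite for community PostgreSQL. The customers table, column names, and function name are hypothetical.

```sql
-- Oracle PL/SQL:
CREATE OR REPLACE FUNCTION get_customer_name(p_id IN NUMBER)
RETURN VARCHAR2
IS
    v_name VARCHAR2(100);
BEGIN
    SELECT name INTO v_name FROM customers WHERE id = p_id;
    RETURN NVL(v_name, 'unknown');
END;

-- A PL/pgSQL rewrite for community PostgreSQL:
CREATE OR REPLACE FUNCTION get_customer_name(p_id NUMERIC)
RETURNS VARCHAR
LANGUAGE plpgsql
AS $$
DECLARE
    v_name VARCHAR(100);
BEGIN
    -- Unlike Oracle, SELECT INTO doesn't raise NO_DATA_FOUND when no row
    -- matches; use SELECT ... INTO STRICT if you need that behavior.
    SELECT name INTO v_name FROM customers WHERE id = p_id;
    RETURN COALESCE(v_name, 'unknown');
END;
$$;
```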
There are often syntax differences in how the source and target databases interact with the data. Oracle uses certain conventions and keywords in its version of the Structured Query Language (SQL) for selecting, inserting, updating, and deleting information that aren't supported in Postgres, and you'll need to be aware of these differences.
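The sketch below shows a few of these Oracle-specific constructs and one possible rewrite for community PostgreSQL; the tables and columns are made up, and EDB Postgres Advanced Server's Oracle compatibility features accept many of the Oracle forms directly.

```sql
-- Oracle-specific constructs (illustrative):
SELECT NVL(o.ship_date, SYSDATE)
FROM   orders o, customers c
WHERE  o.customer_id = c.id(+)
AND    ROWNUM <= 10;

-- One possible PostgreSQL rewrite:
SELECT COALESCE(o.ship_date, CURRENT_TIMESTAMP)
FROM   orders o
       LEFT JOIN customers c ON o.customer_id = c.id
LIMIT  10;
```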
Data migration considerations
While the schema provides the structure, the data is the content of the database. A key consideration is how to move the data to the target database. The right method depends on how much data you need to move and how much downtime you can afford. You'll need to decide whether bulk loading is right for you, where you take a snapshot of the data from the source system, transfer it to the target system, and move on from there. Alternatively, you might need to keep the source application and database up and running while migrating the data to the target database. Since the source database is still receiving updates, you need a way to continue moving that data to the target database.
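For the bulk-loading approach, a minimal sketch of the load step on the Postgres side might look like the following, assuming you've already produced a CSV extract from the source database. The table name and file path are made up.

```sql
-- Load a CSV extract (exported from the source database) into the target table.
COPY orders FROM '/data/exports/orders.csv' WITH (FORMAT csv, HEADER true);
```

If the extract lives on the client machine rather than the database server, psql's \copy command performs the same load from the client side.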
Fallback is another consideration. As you finalize testing of the migrated system, you'll often want a fallback position in case the migration wasn't entirely successful. After beginning operations on the migrated system, you can fall back to the original source database and source application more easily if the legacy database is kept in sync with the data changes being applied in the migrated database. Your fallback plan must ensure that new data entered into the migrated database can make it back into the original source database in a timely manner.
For migrating data, in addition to the core tools we offer for pushing data from the source database to the target database, you might also use tools to support more advanced data migration scenarios, such as more complex extract-transform-load (ETL) needs. These tools pull the data out of the source database, transform it, and then load it into the target database.
Once the data is in the target system, you'll want to validate that it matches the data in the source system.
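A minimal sketch of this kind of validation is to run the same checks against both databases and compare the results; the orders table and columns here are hypothetical, and dedicated comparison tooling is usually a better fit for anything beyond spot checks.

```sql
-- Run against both the source and target databases and compare the output.
SELECT COUNT(*) FROM orders;                                  -- row counts match?
SELECT SUM(total), MIN(created_at), MAX(created_at)
FROM   orders;                                                -- simple aggregate spot checks
```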
Infrastructure considerations
Databases reside on infrastructure, so you'll have to consider the hosting environment for the source database as well as what the target environment will be. For instance, if your current database is deployed on premises, you might decide to migrate to another database also running on premises. However, more and more organizations are considering moving from an on-premises deployment of a database to a cloud-based deployment. If a cloud-based deployment is the ultimate goal, you might decide to migrate the database directly from one running on premises to one running in the cloud. Or you might first migrate the database to one running on premises and then move the migrated database to the cloud.
If migrating to an on-premises database, either as the end state or as a stepping stone to the cloud, you need to make sure you have the server resources on hand to perform and test the migration and to host the operational database and related tools. If migrating to the cloud, your needs are similar, but you will likely have more flexibility to add and remove servers as needed to support the migration process.
Another factor to consider is that many legacy application databases were deployed on specialized hardware and operating systems. This can influence the tools and approaches used for a migration, since you'll likely be targeting more standard server platforms. You'll need to make sure the tools you plan to use can either be deployed on these legacy systems or can access them remotely to retrieve the information to migrate.
When considering a migration to on-premises resources, the deployment options available for EDB Postgres Advanced Server are:
- Physical servers
- Virtual machines
- Cloud-native Postgres containers in Kubernetes
- Private cloud instances
Traditionally, databases were installed on physical servers to optimize performance. Over time, as virtualization technology improved, it became more common for databases to be installed on virtual machines. In the last few years, containers have begun to emerge as a viable option for database deployments, especially as the industry has more or less standardized on Kubernetes as an orchestration framework and database vendors like EDB have developed Kubernetes operators for deploying and maintaining databases in containers.
For migrations to the cloud, the following are the deployment options for EDB Postgres Advanced Server:
- BigAnimal cluster
- Self-managed EDB Postgres Advanced Server installed on cloud-based server instances
- Cloud Native PostgreSQL (CNP)
More and more, organizations are interested in moving to a managed database service (also known as DBaaS) to take advantage of the benefits this service provides to businesses. BigAnimal is the EDB Postgres DBaaS offering available on the major commercial clouds that provides the option to easily deploy either PostgreSQL or EDB Postgres Advanced Server database clusters. If you want to migrate to the cloud but want or need more control over managing your database instances and the underlying servers, you can install and manage EDB Postgres Advanced Server on your own in cloud-based server instances. EDB's Cloud Native PostgreSQL (CNP) lets you deploy and manage PostgreSQL or EDB Postgres Advanced Server clusters in Kubernetes-orchestrated containers. Widespread adoption of containers for databases is still in its early stages. However, as applications shift toward microservices, the use of containers for databases is increasing. Kubernetes clusters can be stood up in the cloud. In fact, the major cloud providers even offer managed Kubernetes services.
When you have different source infrastructure and target infrastructure, consider that you might have optimized your source system based on the infrastructure. When you move to the target system, you might have to do similar optimizations but using different techniques.
Application migration considerations
An application sends information to the database and retrieves information from the database. Most applications that use a relational database management system (RDBMS) backend, such as Oracle or Postgres, issue database queries in the Structured Query Language (SQL). The essential components of the SQL standard were adopted by all major relational databases. However, each of them has features that deviate from the standard and differ from one another. As such, some conversion of the SQL code embedded in the application code, whether statically or dynamically generated, is likely required as part of the migration effort.
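For example, application code frequently embeds Oracle idioms such as the DUAL table and the seq.NEXTVAL syntax for sequences. A hedged sketch of the kind of rewrite community PostgreSQL requires is shown below; the order_seq sequence is made up, and EDB Postgres Advanced Server's compatibility features accept much of the Oracle form directly.

```sql
-- Oracle: applications often fetch sequence values via DUAL.
SELECT order_seq.NEXTVAL FROM DUAL;

-- PostgreSQL: no DUAL table is needed, and sequences use nextval().
SELECT nextval('order_seq');
```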
In addition, applications are usually tuned and configured to perform well with the legacy database system and operating environment. When you move from the old system to the new one, you might have to make some changes to your application so that it runs well against the new database and operating environment.
Operational considerations
Migrating a legacy database to new database technology results in the need to update operational processes and tools to work with the new database. In the case of migrations of Oracle to Postgres, any Oracle-specific tools that are being used will need to be replaced with Postgres-specific tools. Some of the major operational aspects to consider include high availability, backup and recovery, monitoring and management, security, and encryption.
Also, when legacy databases are running on older specialized hardware and operating systems, a database migration might also require migrating the underlying host systems to modern platforms and operating systems. For on-premises migrations, this can result in the need to change your operations and maintenance procedures and tools to work with the new server platforms and operating systems. For migrations to the cloud, some of the operations and maintenance activities will be handled by the cloud service provider but others will still be your responsibility. Likely, those activities will require different tools and procedures from the ones you're currently using.
Some best practice considerations
The following are some best practices to consider when planning for your Oracle-to-Postgres database application migration:
- Don’t skip or skim over steps in the migration journey.
- Do the planning.
- Identify and understand the functional and nonfunctional requirements.
- Complete the solution design early to understand IT resource needs.
- Use development and test environments to help identify and resolve migration issues prior to migrating to a production environment.
- Get application developers and system administrators involved early in the migration journey to work with the database team.
- Set up an environment where the application can be tested with the database in a manner that closely approximates the production environment and usage.
- When migrating the database schema, defer loading indexes and constraints until after migrating the data (see the sketch after this list).
- Engage the help of migration experts and PostgreSQL experts for more complex migrations.
- Ensure operations personnel are trained on the new system prior to post-migration operations.
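As a hedged sketch of the index-deferral practice mentioned above (the table, columns, and file path are made up): create the bare table, bulk load the data, then add indexes and constraints afterward so the load isn't slowed by per-row index maintenance.

```sql
-- 1. Create the bare table and load the data first.
CREATE TABLE orders (
    order_id   NUMERIC(10),
    customer   VARCHAR(100),
    total      NUMERIC(12,2)
);
COPY orders FROM '/data/exports/orders.csv' WITH (FORMAT csv, HEADER true);

-- 2. Add indexes and constraints after the bulk load completes.
ALTER TABLE orders ADD PRIMARY KEY (order_id);
CREATE INDEX idx_orders_customer ON orders (customer);
```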