Move to the Cloud with the Online Accelerator by PowerObjects
Organizations hosting Microsoft Dynamics 365 have options. It can be hosted online, on-prem, or using a hybrid model. Microsoft and PowerObjects strongly recommend online. Based on years of experience and hundreds of cloud migrations for our clients, PowerObjects created an offering that takes your organization through the process step by step by leveraging the power of both Microsoft Azure and the Microsoft Online Accelerator, a tool designed specifically to deliver an ultra-fast migration of all your Dynamics 365 data. Watch this video to learn more!
Integrating to Dynamics 365 and Record Ownership
At PowerObjects, we often find that clients desire the option to perform data migration on their own. This may be due to the sensitive nature of the data or because of an effort to ensure internal resources know the migration/integration packages well. Regardless of the reason – and no matter the experience and skill levels of your organization’s data integration teams, pitfalls exist for those not experienced with Dynamics 365. As such, we insist on the allocation of time and resources to provide education and training to the internal teams responsible for the ongoing integration. Otherwise, your business may face multiple trial-and-error data runs to get it right.
One common mistake we see stems from failure to set the Owner of records when creating them in Dynamics 365. By default, Dynamics will set the Owner of each record to whichever User record was used during the migration. Often, this is an admin user with the System Administrator role – we’ll call it “Admin” for the sake of this blog post. The records will import without issue and record counts will yield expected results. Great, right? Not so fast, as this is where a full understanding of record ownership in Dynamics is crucial. When a user logs in from a different business unit or even the same unit but with lesser permissions, they may not see all the data! Not good.
See, records are owned by users; each user resides in a single Business Unit and is assigned Security Role(s). The combination of Business Unit and Security Role(s) determines which records each user can see. Of course, if all users are granted Organizational access to records, you’ll likely never have this issue. Likewise, if your organization has only a single Business Unit, this potential issue is not applicable. However, many businesses – especially larger organizations – require multiple Security Roles and Business Units.
Let’s consider a scenario in which an organization has a Business Unit structure with a top-level unit named “Contoso.” A child Business Unit of Contoso is “United States,” and children Business Units of United States are “East” and “West.” If records are imported with the Owner set to a system administrator (Admin) who resides in the Contoso Business Unit, all of the imported records will live in top-level Contoso. When other users are added to East, West, or United States with Business Unit (or lower) read access to the data, they will not see those records in the system.
Without testing visibility of data with roles other than Admin, you will not know you have this issue. And considering that your data migration will likely contain multiple jobs for multiple entities, as well as references to other entities, you could end up having to reload all of your data. With the possibility of millions of records in your migration, that data reload could set you back several days or even weeks. Also consider that not seeing the need to have all of your users loaded in Dynamics 365 could lead to a scramble to get all personnel added as users and making updates to the migration jobs.
Now, if you’re thinking No worries… I can just update the owner of the records with a subsequent job, you’re correct in that being a possibility. But… when you go to load your production environment, you will be adding time to your migration to make a second pass on all the records. Why not just avoid that problem entirely by setting the Owner appropriately from the very beginning?
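To make the fix concrete, here is a minimal sketch of setting the Owner explicitly at create time, assuming a Dataverse-style Web API create payload. The entity, record name, and user GUID are placeholders for illustration, not values from a real environment:

```python
# Sketch: a Dataverse Web API create payload that sets the record Owner
# explicitly instead of letting it default to the migration user.
# The account name and user GUID below are placeholders.

def build_account_payload(name, owner_user_guid):
    """Return a create payload that binds ownerid to a specific systemuser."""
    return {
        "name": name,
        # Without this binding, Dynamics assigns ownership to the
        # credentials used by the migration job (e.g. "Admin").
        "ownerid@odata.bind": f"/systemusers({owner_user_guid})",
    }

payload = build_account_payload(
    "Fabrikam East", "11111111-2222-3333-4444-555555555555"
)
print(payload["ownerid@odata.bind"])
```

Setting the binding once per record during the initial load is far cheaper than a second pass over millions of rows to reassign owners later.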
Hopefully this post helps you avoid having to learn this lesson the hard way. For more Dynamics 365 tips and tricks, be sure to subscribe to our blog!
Salesforce to Dynamics CRM Activity Migration
Congratulations! You've made the decision to switch from Salesforce.com to Microsoft Dynamics CRM! We may be biased, but we're pretty positive that you are going to love the features and benefits of Dynamics CRM! Now that you've made the switch, you may be wondering how you can migrate your previous Salesforce.com tasks into your new CRM activities. Well, in today's blog we have those answers for you! So without further ado, let's begin!
IDs in Salesforce are 18-character, case-sensitive strings, and the data source table has several key fields that drive our logic. The first three characters identify which Salesforce entity the record belongs to. As you can see from the image below, all records in the ID column start with "00T", which translates to Task. For the lookup tables we created, we used Scribe Insight to populate the values. You will also need to retrieve all of your CRM records and insert the SF ID and CRM GUID into the new staging mapping table.
This table illustrates some other prefixes commonly used by Salesforce.
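As a rough illustration, the prefix check can be sketched as below. The prefix-to-entity pairs shown are the commonly documented ones; verify them against your own Salesforce org:

```python
# Sketch: identifying the Salesforce entity from an ID by its
# three-character key prefix (a few common prefixes shown).
SF_KEY_PREFIXES = {
    "001": "Account",
    "003": "Contact",
    "005": "User",
    "006": "Opportunity",
    "00Q": "Lead",
    "00T": "Task",
}

def sf_entity(sf_id):
    """Return the Salesforce entity name for an ID, based on its prefix."""
    return SF_KEY_PREFIXES.get(sf_id[:3], "Unknown")

print(sf_entity("00T4000000abcdEFGH"))  # Task
```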
For this example, we load source data into a SQL Server staging database. This makes the process much easier to implement: by creating lookup/translation tables for the entities that were previously loaded into the CRM system, we eliminate the need to make expensive calls to the CRM services in order to populate lookup fields in the new records.
For instance, when the user import is performed, we create a two-field table:
You then do the same for all other entities previously loaded that you need for a successful Activity assignment. Going forward, you simply need to construct a SQL statement to get the source rows and then prepare them for the CRM import.
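An in-memory sketch of such a translation table is below; in the staging database this would be the two-field SQL table, and the GUIDs here are placeholders:

```python
# Sketch: the SF-ID-to-CRM-GUID translation table, as a dictionary.
# Keys are Salesforce user IDs; values are the CRM user GUIDs that
# were captured when the users were loaded (placeholders here).
user_lookup = {
    "00530000000abcDEF": "aaaaaaaa-0000-0000-0000-000000000001",
    "00530000000ghiJKL": "aaaaaaaa-0000-0000-0000-000000000002",
}

def resolve_owner(sf_owner_id):
    """Translate a Salesforce OwnerId into the CRM user GUID loaded earlier."""
    return user_lookup[sf_owner_id]

print(resolve_owner("00530000000abcDEF"))
```

Resolving lookups from a local table like this avoids a round trip to the CRM web services for every row.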
All that's left now is to map the other fields accordingly, test, and run. When the job is finished, you will have all the tasks set and assigned to the appropriate records in CRM!
That's all for our blog today! Still not sure if making the switch from Salesforce.com to Microsoft Dynamics CRM is right for you? Let us convince you!
Migrating Email Body with Line Breaks
Microsoft Dynamics CRM allows users to send emails directly out of CRM itself. Those emails can absolutely be transferred over when migrating from a legacy system to Dynamics CRM. However, sometimes during a migration, if the body of the email in your legacy system is in plain text, the formatting is lost in the migration because CRM expects an HTML format. In today’s blog, we will discuss a few things to consider when migrating an email body during a data migration from legacy systems to Dynamics CRM.
When the body of an email is in plain text, you lose its formatting after a migration because CRM is expecting an HTML format. By utilizing direct mapping, the email body gets imported into CRM without line breaks. Technically, at the CRM database level, the data will have line breaks. However, when presented in the CRM interface as shown below, all line breaks will be lost.
By embedding the body in <PRE> </PRE> tags (which are used to define preformatted data), you are able to migrate the email data without losing the line breaks. As you can see in the image below, your email body is now formatted correctly.
This is how the source query would look:
SELECT '<PRE>' + [emailBody] + '</PRE>' FROM [toCRMEmail]
Another solution that also works is to replace SQL Char(13) and Char(10) with <BR> tags; however, using <PRE> tags ensures that you won’t have to worry about any other formatting or performance issues that may arise.
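If you prefer to transform the body on the application side before it reaches SQL, both approaches can be sketched as below (the function names are ours, for illustration only):

```python
def to_crm_email_body(plain_text):
    """Wrap a plain-text body in <PRE> tags so CRM preserves line breaks."""
    return "<PRE>" + plain_text + "</PRE>"

def to_crm_email_body_br(plain_text):
    """Alternative: swap CR/LF line endings for <BR> tags instead."""
    return plain_text.replace("\r\n", "<BR>").replace("\n", "<BR>")

print(to_crm_email_body("Line one\nLine two"))
print(to_crm_email_body_br("Line one\r\nLine two"))
```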
That’s all for today! Remember that if you run into issues with this or any other CRM functionalities, you can always open a support request with PowerObjects. Thanks for stopping by our blog today!
Migrating Notes via Scribe Failing in Dynamics CRM
Are you using Scribe Insight to migrate Notes (annotations) and their associated attachments, but noticing the attachments are getting corrupted in the process? If so, then you may have just found the answer you've been looking for.
Here are a couple of the errors that we've noticed, depending upon the file type:
"Windows Photo Viewer can't open this picture because the file appears to be damaged, corrupted, or is too large."
"We're sorry. We can't open xxxx.docx because we found a problem with its contents."
Additionally, some attachments open within a browser showing markup similar to this:
<?xml version="1.0" ?>
<error xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<exception>Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=18.104.22.168, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: System.Web.HttpUnhandledException: Microsoft Dynamics CRM has experienced an error. Reference number for administrators or support: #C2BB0118
Detail: <OrganizationServiceFault>
  <ErrorCode>-2147220970</ErrorCode>
  <ErrorDetails />
  <Message>System.Web.HttpUnhandledException: Microsoft Dynamics CRM has experienced an error. Reference number for administrators or support: #C2BB0118</Message>
  <Timestamp>2014-06-02T22:03:32.0499118Z</Timestamp>
  <InnerFault>
    <ErrorCode>-2147220970</ErrorCode>
    <ErrorDetails />
    <Message>System.FormatException: Microsoft Dynamics CRM has experienced an error. Reference number for administrators or support: #F08D964A</Message>
    <Timestamp>2014-06-02T22:03:32.0499118Z</Timestamp>
    <InnerFault i:nil="true" />
    <TraceText i:nil="true" />
  </InnerFault>
  <TraceText i:nil="true" />
</OrganizationServiceFault></exception>
<description>An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.</description>
<details>Exception of type 'System.Web.HttpUnhandledException' was thrown.</details>
We've uncovered what appears to be a bug in the most current ODBC SQL Server Native Client drivers. If you are connecting directly to the CRM SQL Database as the source, it is recommended to use the Native SQL connection as shown here.
The native SQL connection requires an actual SQL user. If you are using an AD Account, you'll need to create an ODBC connection. This is where the bug has been uncovered regarding attachments. Do not use the two ODBC drivers selected in yellow below. You'll need to use the standard SQL Server driver noted in red.
Using the Native SQL ODBC drivers will likely result in corrupted or unreadable file attachments.
In addition, in order to use the SQL Server driver shown above, you'll want to be sure to have DocumentBody be the last field in your source query. Something similar to the query below will get you the Owner's name as well as the other fields necessary to migrate this data.
As you've likely found out already, you need to be sure you are using the correct combination of Virtual Fields for this type of migration to work correctly.
Here are the correct field mappings if you're using a SQL Query as your source. In this example, we're migrating Notes and their attachments from CRM2011 to CRM2013 OnPremise.
First, notice that DocumentBody is the last field to show in the source.
Second, if the ObjectTypeCodes for any custom entities are not the same in your source as they are in the target, you'll likely need to use some nested IF statements or a cross-reference table to populate this field.
Third, notice the two Virtual Fields (vf) that are mapped for the attachment.
Please note that if you are using the Dynamics CRM Scribe Adapter to connect to your source, then you'll need to use a different set of virtual fields.
For additional troubleshooting ideas, visit our myriad of blog posts. As always, if you need additional assistance with Dynamics CRM, don't hesitate to reach out to PowerObjects.
Migrating Dropdown Values into CRM with Scribe
While migrating data from a legacy system to Microsoft Dynamics CRM, you might run into a situation where you need to migrate dropdown values from the source system into CRM using Scribe. In Scribe, you can use "if" conditions to accomplish this, but writing "if" statements for a long list of dropdown values quickly becomes tedious.
To overcome this, you can use the DBLOOKUP2 formula. You can query the StringMap table, which stores all of the mappings between dropdown text and values.
Here are the steps to accomplish this:
1. Open the Scribe Workbench.
2. Open the Scribe package.
3. Configure the target as CRM.
4. Place the following code in the formula and click OK: DBLOOKUP2("Your CRM schema name of the option set attribute", "Your Source Field (S11)", "Your Connection Name", "StringMap", "AttributeName", "Value", "AttributeValue")
5. Test the Scribe job.
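For intuition, here is a sketch of what DBLOOKUP2 is doing against the StringMap table: translating a source dropdown label into the CRM option set integer value. The attribute name and rows below are made up for illustration:

```python
# Sketch: StringMap rows map an attribute's option labels (Value)
# to their integer option-set values (AttributeValue).
stringmap = [
    {"AttributeName": "new_priority", "Value": "High",   "AttributeValue": 1},
    {"AttributeName": "new_priority", "Value": "Medium", "AttributeValue": 2},
    {"AttributeName": "new_priority", "Value": "Low",    "AttributeValue": 3},
]

def dblookup2(attribute_name, source_label):
    """Return the option-set value whose label matches the source text."""
    for row in stringmap:
        if row["AttributeName"] == attribute_name and row["Value"] == source_label:
            return row["AttributeValue"]
    return None  # no mapping found

print(dblookup2("new_priority", "Medium"))  # 2
```

One lookup formula replaces a long chain of "if" statements, and the mapping stays correct even if new options are added later.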
Scribe is a powerful tool to migrate/integrate data to Microsoft Dynamics CRM and can make your migrations a lot less cumbersome. If you found this blog valuable, check out our other blogs about migrating data into CRM with Scribe!
Dynamics CRM Data Migration Tip Using the Import Sequence Number Field
Most entities in Microsoft Dynamics CRM come with an out-of-the-box field called Import Sequence Number. This field is ignored most of the time as it is not shown in the form by default, but this is a very handy field for data migration.
The import sequence number is a whole number field. The range can be customized if needed. The basic idea behind this field is to store the sequence number (ID) of the source record during data import to CRM. If this field is mapped during migration package/script design, it provides a one-to-one link between source row and destination CRM record.
How can we use the Import Sequence Number field with bulk operations?
With bulk operations, we can send a batch of records to CRM at once and let CRM handle the batch on its own. Bulk operations (as opposed to row-by-row operations) make it more difficult to handle failed rows, since the operation happens in bulk. If you do use bulk operations, you’ll need to design an alternative way to handle the failed rows. The same applies to updating source records after they are migrated.
If you have an on-premise or partner-hosted CRM with access to a staging SQL database, you can make the best use of the Import Sequence Number during data migration. It is a highly recommended practice (where applicable) to stage the source data before loading it into CRM.
There are a few advantages of this approach:
Data can be cleaned/flattened in staging tables before loading.
Lookups and references can be resolved in staging scripts.
A unique ID can be assigned to each source record. Additional columns can be created in the staging tables if necessary. For example, you can create a CRM_ID (uniqueidentifier) column for storing the GUID of the record created in CRM, or an UpdateFlag (bit) column to flag the records that need to be migrated.
Proper indexes can be created on the staging table, which helps performance.
It’s good practice to design the staging tables with an integer identity ID column and make it a primary key so it will create a unique clustered index by default. When designing the migration script/package, map this ID field to the importsequencenumber field on CRM entity. This will create a one-to-one link between source row and CRM record. This comes very handy when there are any issues and records fail for some reason.
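A sketch of that mapping is below. The entity and field names other than importsequencenumber are placeholders for illustration:

```python
# Sketch: map the staging identity ID onto importsequencenumber so
# every CRM record carries a one-to-one link back to its source row.
def build_payment_payload(staging_row):
    """Build a CRM create payload from a staging table row."""
    return {
        "po_name": staging_row["Name"],          # placeholder field name
        # The staging primary key becomes the Import Sequence Number,
        # enabling reconciliation after bulk loads.
        "importsequencenumber": staging_row["ID"],
    }

payload = build_payment_payload({"Name": "Payment 0001", "ID": 42})
print(payload["importsequencenumber"])  # 42
```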
There is another good use of the ID field. It is usually a true statement that when multiple threads (packages) are run in parallel, there is a boost in migration performance. The ID field can be used to group records into different buckets so the individual package processes the records from its assigned bucket. The source query would look something similar to this for package #1:
Select * from Staging_Payment p with(nolock) where UpdateFlag=1 and ID<=1000000
And looks like this for package#2:
Select * from Staging_Payment p with(nolock) where UpdateFlag=1 and ID>1000000 and ID<=2000000
And so on.
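The same bucketing can be sketched in code, assuming buckets of one million IDs as in the queries above:

```python
# Sketch: assign each staging row to a parallel-package bucket by ID,
# mirroring the WHERE clauses above (bucket size of 1,000,000 assumed).
BUCKET_SIZE = 1_000_000

def bucket_for(row_id):
    """Package number (1-based) that should process this staging row."""
    return (row_id - 1) // BUCKET_SIZE + 1

print(bucket_for(1))          # 1
print(bucket_for(1_000_000))  # 1
print(bucket_for(1_000_001))  # 2
```

Because the buckets never overlap, the packages can run in parallel without contending for the same rows.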
At this point, you can kick off the bulk operation…don’t worry if records fail. Update the staging table with CRM record GUID and set the UpdateFlag to 0. (UpdateFlag=0 means that the records are already processed so it will not be picked up again by migration package.)
Here is sample code to update the source using importsequencenumber and ID:
UPDATE p
SET p.CRM_ID = b.po_paymentid,
    p.UpdateFlag = 0
FROM CRM_MSCRM..po_paymentBase b with(nolock)
JOIN Staging_Payment p with(nolock) ON b.ImportSequenceNumber = p.ID
WHERE p.CRM_ID IS NULL AND p.UpdateFlag = 1;
Once the source update completes, rerun the package/script to process the failed rows, i.e. the rows whose staging ID does not appear in any CRM record's Import Sequence Number. Updating the source is very important for avoiding duplicate record creation in CRM if the migration packages are run multiple times.
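The failed-row check amounts to a simple set difference, sketched here with toy ID lists:

```python
# Sketch: staging IDs with no matching CRM Import Sequence Number are
# the rows that failed and need to be rerun.
def failed_rows(staging_ids, crm_import_sequence_numbers):
    """Return staging IDs that never made it into CRM, in order."""
    migrated = set(crm_import_sequence_numbers)
    return [i for i in staging_ids if i not in migrated]

print(failed_rows([1, 2, 3, 4], [1, 3]))  # [2, 4]
```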
If you use Scribe for migration, the pre- and post-processing feature can be used to automate this update source process.
Create a SQL script file with the above update script and save it in the same folder as the migration package.
Browse this file to run BEFORE and AFTER the job runs as below:
If SSIS is used for migration, then the above update script can be used in ExecuteSQL tasks before and after the Script component or Script Task whichever is used for calling CRM web service.
These are some of the little tips/tricks that can save you a lot of time during data migration, especially for large data sets (in the millions). You can read plenty more on migrating data in CRM on our blog!
Manually Editing the Scribe QETXT.ini File for Microsoft Dynamics CRM
When migrating large text files with many columns in Microsoft Dynamics CRM using Scribe Insight, sometimes it's more efficient to manually edit the QETXT.ini file. A good example of this would be when migrating from text file exports from Salesforce.com to Microsoft Dynamics CRM.
How many times have you set up a connection to a text file using a sample file only to later get the full data set and realize all of your columns are too small or of the wrong data type? You can save yourself time and energy by manually editing the QETXT.ini file. The QETXT.ini is a text file that is created automatically when using the Scribe Text Wizard to create a connection to a text file.
To see an example of the QETXT.ini, you can navigate to the TextData samples that get installed with Scribe Insight.
In this example, we will be using the sample Company.dts that Scribe provides to illustrate how to change the QETXT.ini. When using the Scribe connection wizard to define text connections, it will default the length of any character fields to the longest value in the current file. We want to change all character columns to a length of 100.
First let's look at the column definitions prior to editing. Notice the column lengths vary; they were set automatically from the data at the time of the first connection.
Now open up the QETXT.ini and edit the column lengths. Notice each new text connection is started with the connection name in brackets [ ].
For each VARCHAR field, change the length to 100 in two locations.
Repeat until all VARCHAR fields have been updated.
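If you have many fields, a small script can make the edit for you. This sketch assumes a Name,VARCHAR,length layout for the field definitions; check your own QETXT.ini for the exact format before relying on it:

```python
import re

# Sketch: bulk-edit a QETXT.ini fragment so every VARCHAR field gets
# length 100, leaving non-character fields untouched. The sample
# content below is illustrative, not a real Scribe-generated file.
sample = """[Company]
Field1=CompanyName,VARCHAR,35
Field2=City,VARCHAR,20
Field3=Employees,INTEGER,4
"""

# Replace the digits after ",VARCHAR," with 100 everywhere.
widened = re.sub(r"(,VARCHAR,)\d+", r"\g<1>100", sample)
print(widened)
```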
That's it! Once you re-open the Scribe job, you will see that all of the column lengths have been updated.
Scribe Queue Publisher vs. Query Publisher for Dynamics CRM
There are multiple ways you can connect to Microsoft Dynamics CRM as a source from the Scribe integration tool. The two most frequently used are the CRM (queue-based) Publisher and the Query Publisher. Today we'll go over the difference between these two and when to use them.
The CRM publisher uses plug-in logic to push data from CRM to the Scribe server. Scribe then picks up the XML file and processes it. In this process, the data goes from the CRM server to the Scribe server almost instantaneously. This method is most useful if you need to process every single record separately and you need to integrate the data immediately. At the same time, be careful with the data load, since a large volume of data can cause the Scribe server to stop responding.
Here is a screenshot of the CRM publisher selection:
The Query Publisher, by contrast, is time-based. With this option, you will probably use the CRM filtered views to run your SQL queries as the source, and you'll need to schedule the Scribe processor to run at defined times. You can use this method when you only need to integrate summary data and can integrate during off-peak hours. Doing this can give you better CRM performance.
Hope this helps! For more information about using Scribe, check out some of our other blogs:
Copying Scribe Steps to Multiple CRM Environments with DTSEdit
There may be a scenario when working with Scribe Insight and Microsoft Dynamics CRM when you have to integrate data to multiple locations at once—for example, if you're syncing data from an external system to a production as well as test instance of Microsoft Dynamics CRM. In order to save time and to ensure identical data in each environment, the best practice is to create one step and then make a copy of it rather than creating each step twice by hand.
1. Once you have completed the design and build of the first step, you can make a copy of it by going into Flow Control under the Configure Steps section and using the copy step button.
2. After copying the step, rename it with a relevant name so you can identify it later. Notice the Connection says MS CRM Test Environment. There is no way to change this from within the Scribe Workbench.
3. At this point you have one step for your MS CRM Test Environment and another step ready for your MS CRM Production Environment, but they are both still pointing to the same test instance of CRM. This is where we can use DTSEdit to modify the CRM instance for the second step. First we need to make sure we have a connection defined, so let's take a look at our connections.
As you can see, we have two connections to Microsoft Dynamics CRM 2011. The MS CRM Test Environment is currently being used as the source and target. We are going to use DTSEdit to adjust step two of our Scribe job to point to MS CRM Production Environment. Go ahead and close out of the Scribe Workbench at this point.
4. DTSEdit is a tool provided by Scribe Software that allows you to open and edit Scribe DTS files in a spreadsheet-based fashion. It should be found in the Scribe folder under Program Files.
5. Double click on dtsedit.exe to open the application and open up the Scribe DTS file we have been working with.
6. As you can see, DTSEdit is laid out much like Excel—columns and rows inside each sheet, and multiple tabs across the bottom of the document.
There are four sheets that will need to be modified to adjust the connection for step two of our job.
We will cover details of each step. There is a ton of data available in the DTSEdit tool. Feel free to play around and try different things, just be sure to have a backup of your DTS files before you do!
7. Now it's time to work with the data providers sheet. The data providers sheet lists the connections stored in the Scribe job. We have already defined our new connection, so at this point we just have to tell it that our connection will now be used as a target. Locate the Data Providers Sheet and the connection we defined as MS CRM Production Environment.
Put a "T" in the Connection Value Column next to Usage to denote this data provider is now being used as a target.
8. Locate the Target Tables sheet. This sheet lists all target steps contained in the job. Locate step two and the Data Providers column that denotes MS CRM Test Environment. Change that column to MS CRM Production Environment.
9. Locate the Keys sheet. Keys are synonymous with lookup links in the Scribe Workbench. Find any key that is part of step two.
Update the data provider columns to match the MS CRM Production Environment.
10. Locate the links sheet. Update the Data Provider column to reflect the new MS CRM Production Environment provider where necessary. There will typically be many more rows to update on this sheet.
Keep updating until you've completed all the links for step two.
11. At this point that's it! You can now re-open the Scribe job in the Scribe Workbench. You will know very quickly if you missed anything as you will see errors upon opening. If you open your Connections or Configure steps, you will notice that they have been updated to reflect the MS CRM Production Environment.
As you can see, DTSEdit can be a very powerful tool when creating integration and migrations with Scribe Insight to save time and reduce errors!
Format Phone Numbers with Leading Zeros via Scribe Insight in CRM
If you need to migrate data to Microsoft Dynamics CRM where the phone number field(s) contain leading zeros, this blog is for you! Using a migration tool such as Scribe Insight in CRM, we can use a series of functions/formulas to address this issue.
The following function will not only handle the leading zeros, but if there are exactly 10 numeric characters in the source, it will also format it as (xxx) xxx-xxxx. If there are not exactly 10 characters in the source, the function will simply populate the raw value from your source.
Here is the formula. You'll want to replace all of the "S8" values with the appropriate field from your source.
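If you're curious what the logic amounts to outside of Scribe, here is a sketch in code (our own function, not the Scribe formula itself): keep leading zeros, format exactly-10-digit values as (xxx) xxx-xxxx, and otherwise pass the raw source value through.

```python
import re

def format_phone(raw):
    """Format as (xxx) xxx-xxxx when there are exactly 10 digits,
    preserving leading zeros; otherwise return the raw source value."""
    digits = re.sub(r"\D", "", raw or "")
    if len(digits) == 10:
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return raw

print(format_phone("0123456789"))  # (012) 345-6789
print(format_phone("12345"))       # 12345
```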
Activity Parties and Dynamics CRM to CRM Migrations with Scribe
Activity Parties in Dynamics CRM are one of the most complex entities to work with regarding data migration. If you don't know, activity parties are the entities that contain the relevant data for the party list fields on Emails, Appointments, etc. (Check out some additional information about this entity, which has been around for quite a while.) These fields typically have more than one record associated with them, and the activity party entity record for the field contains this data pointer. The nature of this relationship makes activity parties rather difficult for data migrations, and information on them is not terribly plentiful, so let's chat about them!
Let's take email records, for example. We have several activity party types we will need to work with here: the To:, Cc:, and Bcc: fields being the main ones. For each email record we migrate into CRM, we need to associate the To, Cc, and Bcc fields with the appropriate activity party record to get them migrated in with the proper recipient data.
There are a couple ways you can do this:
The best way to do this is to migrate the email records over from the source to the target CRM with the same GUID on both sides. Then you can simply run a migration of activity party records and directly associate the GUIDs. This is in effect taking a replica of the data in the source CRM and migrating it to the target CRM, keeping everything the same. You would map the activity party ActivityID, PartyID, and PartyTypeCode to one another directly. You would then set the activityfieldname to be the field you want records written for (i.e., To, Cc, or Bcc). You would need to change this value for each run to get all records over; if you specify To, then Cc and Bcc will fail, and you will have to run them next.
You could also try to define logic to incorporate all these fields into one mapping, if you would prefer. ActivityTypeCode would be set to 4210 for emails. Also ensure that your PartyObjectTypeCode is identical to the source, else you will have to create a file lookup to mitigate differences.
If you do not have the luxury of maintaining the same GUIDs as the source CRM, you can take a different approach: first do an Insert or Insert/Update of the email record, and then, as a second step, do an Insert of the associated activity party record. Note that you cannot update activity party records.
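A sketch of the direct GUID-to-GUID mapping described in the first approach is below. The field names follow the source columns mentioned above, and the GUID values are placeholders:

```python
# Sketch: one activity party mapping row per run, with the target
# field (to/cc/bcc) swapped between runs. GUIDs are placeholders.
def build_activity_party(activity_id, party_id, party_type_code, field_name):
    """Build one activity party record for a direct CRM-to-CRM migration."""
    return {
        "ActivityID": activity_id,        # same GUID as the migrated email
        "PartyID": party_id,              # GUID of the recipient record
        "PartyTypeCode": party_type_code, # object type of the party record
        # Run once per field: "to", then "cc", then "bcc".
        "activityfieldname": field_name,
    }

party = build_activity_party(
    "aaaaaaaa-0000-0000-0000-000000000010",
    "aaaaaaaa-0000-0000-0000-000000000020",
    2,
    "to",
)
print(party["activityfieldname"])  # to
```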
As always, if you have any questions on how to do this, PowerObjects Scribe wizards are available to help!