Tag Archives: dynamics365

PowerApps Component Framework – Beyond the boundary

Published / by AK / 1 Comment on PowerApps Component Framework – Beyond the boundary

Note: I am going to describe some dark magic with PCF that is not currently supported by Microsoft. These are my personal findings, and views are my own. But if you want to dismantle PCF and test its boundaries, read on.

Credit: Andrew Ly for his Number Button Selector control and feedback, and Rami Mounla for proofreading and correcting the content.

The source code can be downloaded from my GitHub repository.

Intro

Microsoft announced the public preview of the PowerApps Component Framework (PCF) and its Command Line Interface (CLI) in April 2019. Many developers, including me, have been anticipating this feature. I find the documentation useful, and a few people have already started building and sharing new controls. No disappointment from this beautiful community.

Absolute need

TypeScript. Although I am not absolutely convinced by the preference for TypeScript over JavaScript (JavaScript is my favourite, by the way), it is the only choice we have right now. However, Andrew Ly has pointed out areas where TypeScript really shines, such as strong typing and OO concepts, bridging the gap between JavaScript and OO languages like C#/Java.

Beyond absolute need

There is no doubt you can build any control by following the Microsoft documentation. But I don’t want to stop there; I want to push its boundaries and limits. So far, I recommend adding three more areas of knowledge.

1. React for UI components

When the sample in the Microsoft documentation shows building the UI in TypeScript, it reminds me of my student days, when an assignment asked me to build a UI in C++. I could not force myself to create a UI in a C++ program.

TypeScript for building modern UI? No, please.

I admit I was super lazy during my student life; I only wanted to deliver the bare minimum. So, UI in a code file is a NO for me. However, as my career progressed and I collected some experience, I allegedly became a bit wiser. Yet, the wiser me still supports the lazy me.

There are many UI libraries to choose from. However, you can already find the react and react-dom packages under node_modules after executing “npm install” at step 4. So, React is the choice. You can also use JSX, which makes life easier when building the DOM.

2. NPM for packages

The motto here is “Why reinvent the wheel?”. NPM hosts thousands of free packages, and most well-established libraries are now distributed over NPM. It is like NuGet for JavaScript. Now, you may ask “why not just reference a CDN?”. Well, that brings me to the next one.

3. Webpack for bundling

Webpack is a module bundler. It bundles all your resources, including packages from NPM, into a single package. Referencing a CDN only works when there is connectivity (it is an external dependency), so if we want to use the control offline on mobile devices, we may run into issues.

The good news is that you don’t need to worry about NPM and webpack; the PCF CLI handles them gracefully in most cases. I did, however, notice some issues with adding large NPM packages (>500 KB): the bundling tends to behave weirdly.

Create a new PCF Project

Use the “pac pcf init” command to create a new PCF project. It will pre-create ControlManifest.Input.xml and index.ts. A typical invocation is shown below.

pac pcf init
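
For example, a minimal invocation looks like this (the namespace and control name here are placeholders; use your own), followed by restoring the packages:

pac pcf init --namespace Demo --name DemoControl --template field
npm install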

Create a React component

Let’s create DemoComponent.tsx and write some code. The full sample source can be found in my GitHub repository; a trimmed-down sketch follows the project structure below.

Project structure
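
As a minimal sketch (the component name and its message prop are illustrative; the real control in the repository is more elaborate), DemoComponent.tsx could look like this:

import * as React from "react";

export interface IDemoComponentProps {
  // Hypothetical prop for illustration; the real control defines its own props.
  message?: string;
}

export class DemoComponent extends React.Component<IDemoComponentProps> {
  public render(): JSX.Element {
    // Render something simple so we can confirm React is wired into the PCF control.
    return <div className="demo-component">{this.props.message || "Hello from React"}</div>;
  }
}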

Run the ‘npm run build’ command. You will get an error saying ‘--jsx’ is not set. To resolve this, add “jsx”: “react” to the compilerOptions in tsconfig.json located under your root folder (see the snippet after the screenshots). After this, you will be able to run ‘npm run build’ without any issue.

JSX error
Updating tsconfig.json
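
For reference, the relevant part of tsconfig.json ends up looking roughly like this (the file generated in your project may contain additional settings):

{
  "extends": "./node_modules/pcf-scripts/tsconfig_base.json",
  "compilerOptions": {
    "jsx": "react"
  }
}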

Now, let’s render the DemoComponent in index.ts. You will see an error and will not be able to run the ‘npm run build’ command.

Render the React component

Now, let’s rename index.ts to index.tsx and you will see the error is gone from the IDE. You also need to update the code file extension in ControlManifest.Input.xml. A rough sketch of the resulting index.tsx follows.
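
A minimal sketch of the rendering code, assuming the DemoComponent from above and the usual PCF generated manifest types (your class name, props and inputs will differ):

import * as React from "react";
import * as ReactDOM from "react-dom";
import { DemoComponent } from "./DemoComponent";
import { IInputs, IOutputs } from "./generated/ManifestTypes";

export class DemoControl implements ComponentFramework.StandardControl<IInputs, IOutputs> {
  private _container: HTMLDivElement;

  public init(context: ComponentFramework.Context<IInputs>, notifyOutputChanged: () => void,
    state: ComponentFramework.Dictionary, container: HTMLDivElement): void {
    // Keep a reference to the container so updateView can render into it.
    this._container = container;
  }

  public updateView(context: ComponentFramework.Context<IInputs>): void {
    // Re-render the React component whenever the framework pushes new data.
    ReactDOM.render(<DemoComponent message="Hello from PCF" />, this._container);
  }

  public getOutputs(): IOutputs {
    return {};
  }

  public destroy(): void {
    // Clean up the React tree when the control is removed.
    ReactDOM.unmountComponentAtNode(this._container);
  }
}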

But, when you run ‘npm run build’, you run into another issue saying ‘Code file needs to be a typescript (.ts)’.

Looking at the error, it comes from the pcf-scripts module, which is located under the node_modules folder in your PCF project.

tsx error during build

A quick search for the error message brings me to line 84 of controlcontext.js under the pcf-scripts module.

Only allow .ts

You simply need to update the ‘if’ statement to also allow the .tsx extension and save the file. And magic happens.

Add .tsx into file type

Now, if you run ‘npm run build’, it will succeed. Next, it is time to execute ‘npm start’. When the browser launches, you will see the error ‘React’ is not defined.

React error

To address this error, open index.html under <your PCF root folder>\node_modules\pcf-start and add the following two script tags.

<script src="https://unpkg.com/react@16/umd/react.development.js" crossorigin></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js" crossorigin></script>

As soon as you save your changes, your React component will be rendered.

React control demo

Congratulations! You have created a custom control using PCF and React.

The procedure in this article is experimental only.

Final word

Even though you can deploy a React control to a model-driven app, this approach is not currently supported by Microsoft. I am sharing this only for experimental purposes, to show how PCF can be extended using modern libraries. I am neither a pro React developer nor a TypeScript developer, so I may have approached this the wrong way. If you have any comments, please share them below.

Six months after deploying my first PowerApps

Published / by AK / Leave a Comment

This post is more of a rant than an evangelist piece.

In October 2018, I was involuntarily involved in developing a PowerApps app for one of our clients at work. “Why involuntarily?” you may ask. Well, although I mainly work with the Microsoft platform, I play around with iOS and Android native app development and help out my friends. So, I know fairly well the biggest challenge with mobile app development: making your app work across different devices.

PowerApps is great; I have no doubt about it. All canvas apps run inside the native PowerApps app on your mobile, which makes it very different from cross-platform mobile development. It also means you have no control over the release cycle of the native PowerApps app.

Despite my strong objection, we managed to deliver the app in 10 days. The purpose of the app is to replace the paperwork involved in auditing farms. Basically, an admin creates audit records in D365 and assigns them to auditors; the auditors then download the records to their apps for offline use and drive out to remote areas for auditing (which also captures photos and signatures). When the auditors arrive back home at night, they sync their offline data to D365 and pick up newly assigned audits. Photos need to be uploaded to the client’s Google Drive. Why not? Microsoft Flow has an out-of-the-box connector for Google Drive. Easy peasy.

Then the client wanted to add new functionality to share the photos uploaded to Google Drive with the audited farm. Although you can share files/folders in Google Drive, the out-of-the-box connector is not capable of doing it. After going through the Google Drive REST APIs, we solved it by creating a custom connector. Delivering all of these functions in 10 days was impressive. It was working fine. Well… for two months.

In December, the client told us some of their auditors were facing issues with syncing records. We enhanced the app for better performance. During the enhancement, everything was fine. But when the app was re-opened, it showed over 200 errors on one of the global variables in the app. Suddenly, the Set and Patch functions on a record-type global variable didn’t work anymore. (You can find a similar issue here, though I didn’t report it.)

I ended up re-creating the app almost from scratch. Tweak and fix, again and again. One of the challenges with the web player is that you can’t test offline mode: SaveData and LoadData don’t work there. I hope Microsoft will at least create stubs for these two functions in the web player in the future. To test on a device, you have to publish the app and open it at least twice on the device to see the latest version. It is not productive at all, but we managed to fix all the issues.

But there was another problem. Microsoft had released a new version of PowerApps, and unfortunately it was a bad one. Without noticing, I kept working on the app. By the time I published it to test on my device, it was too late; there was no way back. If I rolled back to the working version, all my changes would be lost. If I stuck with the current version, it couldn’t be tested. It didn’t look good at all.

Luckily, Microsoft released a bug fix over the weekend and the original issue was gone. But again, the app started to behave strangely: functionality that used to work stopped working. For example, if I rolled back to the old working version, it worked, but not on the latest PowerApps version. It is so frustrating for a developer. You have no control over which PowerApps runtime you get; Microsoft dictates it. Because of that, one bad runtime version can create many issues even if you don’t make any changes.

During the enhancement, we also started to notice errors when listing Google Drive files in Microsoft Flow. After searching the Google Drive REST APIs, it turns out Google limits the items returned from the API to 100 by default, although this can be changed. However, there is no way to define the page size in the out-of-the-box connector. We didn’t have the issue in UAT or in the early days after the production go-live since there were fewer than 100 items, but now we did. As usual, a custom connector saved our lives.

All issues seem to be sorted out, at least for now. Hopefully, the app will behave nicely for the next two months. But I learned the following lessons.

  • PowerApps is easy but not necessarily simple – there are a lot of gotchas, and similar functions with totally different behaviours.
  • You cannot control the PowerApps version – one bad version can break your app into pieces just by publishing, without you making any changes. If it happens during the development phase, it will stall the timeline.
  • Out-of-the-box connectors are useful for demos but not always reliable for production – always refer to the source API documentation, not the Microsoft Flow documentation.
  • The community is great but not all issues are resolved – looking at the example with the Set and Patch functions in the link above, it took about two months for the Microsoft PowerApps team to acknowledge the issue. The worst part is that it is still not resolved.

Channel Integration Framework with Twilio – Part 1

Published / by AK / Leave a Comment

I have posted a video of an incoming call using CIF here. It is a live working demo using a Twilio trial account. I am going to share my experience of building this so anyone can easily set up the basic features and start exploring more awesomeness.

I tested the public preview of the Microsoft Dynamics 365 Channel Integration Framework before the Christmas and New Year holidays. Moving house over the holiday period then kept me occupied for weeks.

Last week, I started to look into CIF again. During the public preview, I had issues with incoming and outgoing calls. I was able to connect to the Twilio service, and outgoing/incoming calls that return pre-defined messages were successful. However, I couldn’t figure out how to make a call to a real number.

In this post, I am going to cover D365 users answering incoming calls directly from the web interface – without leaving D365 at all.

Preparation

Before we start, please go through the following.

Now, we can begin!

Setting up

You will find that the steps provided in the Microsoft documentation are easy to follow. But I had problems with the section on creating Twilio functions (https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/channel-integration-framework/sample-softphone-integration#create-function-to-use-with-the-app-service).

According to the Readme included in the sample code, you have to create functions in Twilio. I followed the instructions but couldn’t make it work. So, I started by creating a basic client and later added a couple of controllers to the sample code to set up a Twilio client.

You can follow the tutorial at https://www.twilio.com/docs/voice/client/tutorials/how-to-set-up-a-server-for-twilio-client to create a basic client, including a TokenController and a VoiceController. Creating our own controllers gives us the full capability of the web, flexibility, and fine-grained control over call processing, such as limiting functionality or dynamic routing driven by data stored somewhere else.

In the Twilio console, it is very easy to get confused and lost in the navigation. However, we can get the information we need from:
– Twilio account dashboard at https://www.twilio.com/console
– TwiML apps at https://www.twilio.com/console/voice/twiml/apps
– Phone numbers at https://www.twilio.com/console/phone-numbers/incoming

In short, you need to

Once you have deployed the sample code, you need to configure your trial Twilio phone number. First, let’s create a TwiML document to route the call and store it in a TwiML Bin. You can find TwiML Bins under Runtime or at https://www.twilio.com/console/runtime/twiml-bins/. For testing purposes, let’s create the following TwiML.

TwiML
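
For reference, the TwiML in the screenshot routes the incoming call to a named client and looks essentially like this (the client identity ‘ak’ is explained below):

<Response>
  <Dial>
    <Client>ak</Client>
  </Dial>
</Response>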

You may ask, “What the heck is ‘ak’ doing between the Client tags?” Well, it directly relates to the client name of the token generated by TokenController.cs.

TokenController

Once the TwiML is saved, go to your phone number and configure its “A CALL COMES IN” setting by choosing TwiML and selecting the newly created TwiML Bin.

Phone Number Configuration

That is all you need to configure to receive and answer calls using CIF. It takes a while to get it right; reading the Twilio documentation will help you a lot.

Enhancements

You will notice that I use a static TwiML document to route to the client in the phone number configuration. In the real world, I think a webhook is a better approach, where you can return TwiML dynamically according to your business rules. In addition, TokenController.cs could do a similar thing and generate a dynamic token for each logged-in user. That way, the call could be routed to the correct client.

Need help?

Reach out to me if anything is confusing or you need any help.

Passing current login user of Microsoft Dynamics CRM Portal to external web app

Published / by AK / Leave a Comment

You can get the details of the current logged-in user in a Liquid template via the user Liquid object. There is another way to get the current logged-in user: via an XHR call.

Microsoft CRM Portal has a built-in API to generate a JWT for the current logged-in user. The API is at https://<crm portal url>/_services/auth/token and returns a JWT. This JWT is nothing but a JSON object signed using the RS256 algorithm. Anyone can decode its payload; in other words, anyone can also craft a token with a similar payload.

You sometimes need to pass the current logged-in user’s information to an external web app. Since it takes very little effort to craft a JWT and pass it to your external website, it would be very easy to bypass security if you trusted tokens blindly. Therefore, you will definitely want to verify the authenticity of the token to ensure it was generated by a trusted source (in this case, your CRM portal).

The beauty of JWT is that you can verify the token’s signature using a public key. If you are not familiar with PKI, the process generally involves the source (the CRM portal), which signs the token using its private key (already handled by the CRM portal), and the target (your external web app), which verifies the authenticity of the token using the public key. To do this, get the public key of your CRM portal from https://<crm portal url>/_services/auth/publickey.

The whole process, in order, is:

  1. Pass the JWT as a parameter in a web request/link to your external web app.
  2. In your external web app, get the public key from the CRM portal and verify the signature of the JWT contained in the web request (a minimal verification sketch follows).
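
A minimal sketch of step 2 in Node/TypeScript, assuming the jsonwebtoken npm package and Node 18+ for the built-in fetch; the portal URL is a placeholder:

import * as jwt from "jsonwebtoken";

// Placeholder portal URL; replace with your own CRM portal.
const portalUrl = "https://yourportal.microsoftcrmportals.com";

export async function verifyPortalToken(token: string): Promise<string | object> {
  // The portal exposes its RSA public key (PEM) at /_services/auth/publickey.
  const response = await fetch(`${portalUrl}/_services/auth/publickey`);
  const publicKey = await response.text();

  // Throws if the signature is invalid or the token has expired.
  return jwt.verify(token, publicKey, { algorithms: ["RS256"] });
}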

That’s easy, simple and neat. Right?

Next time, we will have a look at the Azure AD B2C configuration to authenticate users, which requires more configuration and adds a little bit of complexity.

Accessing Dynamics 365 WebAPI as an application user without using ADAL

Published / by AK / Leave a Comment

Microsoft has recently posted documentation for using Postman with the Dynamics 365 WebAPI at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/webapi/use-postman-web-api. That approach prompts you to log in to the application with your own credentials to generate a bearer token, which is then passed to the Dynamics 365 WebAPI when a request is made.

However, we sometimes need to use an application user to access the WebAPI, either for integration testing or for implementing automation. Microsoft has documented server-to-server (S2S) authentication using the Azure Active Directory Authentication Library (ADAL) at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

Although I am a developer, I strongly believe we can (and should) only automate a process once it can be done manually. Therefore, tools like Fiddler, SoapUI and Postman are essential to me when it comes to testing WebAPIs.

Now, I will show you how we can generate an access token for an application user in Postman, assuming you have already created one. If you need detailed steps for creating an application user, please follow the instructions at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

In Postman, let’s create a few Postman environment variables.

  • TenantId – to store the GUID of the Azure AD tenant (Directory ID)
  • D365Url – to store the URL of the Dynamics 365 instance
  • D365AppId – to store the Azure application ID
  • D365AppSecret – to store the application secret
  • D365BearerToken – to store the generated access token

Environment variables

Then, let’s create a new POST request to https://login.microsoftonline.com/{{TenantId}}/oauth2/token with the following data (summarised after the screenshot).

Token request
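
For reference, a client-credentials token request against the v1 /oauth2/token endpoint typically uses an x-www-form-urlencoded body like the following (using the environment variables created above; your screenshot may label them slightly differently):

grant_type: client_credentials
client_id: {{D365AppId}}
client_secret: {{D365AppSecret}}
resource: {{D365Url}}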

Now, send the request and it should return an access token (assuming the application user has been set up correctly).

Access Token

Now, we need to store the access token in our Postman environment variable D365BearerToken so that we can reuse it when accessing the Dynamics 365 WebAPI. To do this, we write a simple script in the Tests area (a short script like the sketch after the screenshot).

Writing tests
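
A minimal version of that Tests script, using Postman’s pm API (the exact script in the screenshot may differ slightly):

// Parse the token response and save the access token for later requests.
var json = pm.response.json();
pm.environment.set("D365BearerToken", json.access_token);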

Pretty easy, right? Now, let’s create another GET request to retrieve the version number of the Dynamics 365 instance (see the example request after the screenshot). The request’s authorization type should be set to ‘Bearer Token’ and its token value should refer to our Postman environment variable D365BearerToken.

Retrieve version
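
For example, the WebAPI’s RetrieveVersion function can be called like this (the API version segment may differ for your instance):

GET {{D365Url}}/api/data/v9.1/RetrieveVersion()
Authorization: Bearer {{D365BearerToken}}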

Now, you can test any WebAPI calls using an application user without relying on ADAL.

What’s next?

Using this approach, you are not limited to the Dynamics 365 OData query actions in Flow/Logic Apps. You can now call any Dynamics 365 message and perform various tasks.

Creating a custom connector to upload a file to Google Drive from PowerApps – Part 2

Published / by AK / 1 Comment on Creating a custom connector to upload a file to Google Drive from PowerApps – Part 2

Part 1 can be found here.

Swagger time

To create a custom connector, you can upload either a Swagger file or a Postman collection. I am not a Swagger expert, but apistudio makes my life easier. Let’s jump onto http://specgen.apistudio.io.

Paste the URL that you used in Postman and make sure you are using POST. You should receive a 202 status after sending the request. One more thing to remember: click the ‘Next Step‘ button instead of clicking the tabs. Otherwise, you will need to redo your work.

Generate swagger

Enter the API Program and contact info. Be sure to use a valid Contact Url and Contact Email. What’s next? The Next Step button.

Swagger contact info

Next is API Info. You can play around with the slider, which changes the API Base Path and API Path. Give it a meaningful OperationId.

API Info

There is nothing special to do with Headers Info and Params Info, although we will make some manual changes to them later. I don’t see (and don’t know) a way to make those changes in the generator.

Keep clicking Next Step until you get to the Open API Spec page, where you can download the Swagger file.

Open API Spec

Now, let’s make a few changes to the Swagger file.

First, we need to replace null with application/json in the “produces” collection, and remove the whole section for “content-type”.

Remove content-type

Modify produces

Set default values for api-version, sp, sv and sig. You can find these values in the URL or using Postman.

Set default values

Lastly, you need to add a parameter for the file content, which will be read from formData as a file type (see the fragment after the screenshot).

Adding file params
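
Putting those edits together, the relevant fragments of the Swagger file end up looking roughly like this (the parameter name “file” is illustrative; align it with what your Flow expects):

"produces": [
  "application/json"
],

{
  "name": "file",
  "in": "formData",
  "description": "File content to upload",
  "required": true,
  "type": "file"
}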

That’s pretty much all you need to do in the Swagger file before creating a custom connector in PowerApps.

Creating a custom connector

Launch PowerApps in the browser, go to Custom connectors, and import your Swagger file.

Create a new custom connector

Create custom connector

Generally, you don’t need to change anything until you get to 3. Definition. At 2. Security, you can still use ‘No authentication’, as the API authenticates using the sig parameter you provide in the query string. As long as you don’t expose the signature parameter, no unauthorised user can use your API.

At step 3, scroll to the Request section where you will see the query parameters. Edit api-version, sp, sv and sig, and change their visibility to internal, since we have set default values in the Swagger file.

Modify visibility

After changing the visibility of api-version, sp, sv and sig, you can now create the custom connector.

Once it is created, it is time to head into PowerApps. In your app, when adding a new data source, you can now find your custom connector.

Custom connector in PowerApps

After creating the data source, let’s add some controls to the app and check whether it works. I have added a PenInput and a Button as below. On the Button’s OnSelect, I am going to push the image from the PenInput to my Google Drive.

Sample PowerApps

Voila! It is now uploaded to Google Drive.

PenInput to Google Drive

PowerApps does not work

You know it, right? Save, close, and open your app again.

One last thing

This is only an example of how we can easily (of course, I can now say ‘it is easy’) turn any Flow functionality into a custom connector. This also addresses the concern with the Camera control, which produces a low-resolution photo. With this approach, we can use the AddPicture control not only to take full advantage of the device camera but also to pick up photos/files that already exist on the device.

Generally, we don’t need custom connectors if the parameter types are simple. In this example, we need Base64. I guess that, depending on the parameter type of an endpoint, PowerApps passes suitable parameter values.

Anyway, we can now invoke all those near-infinite possibilities of Flow connectors from PowerApps.

 

Multiple user roles and entity permissions in Microsoft CRM Portal

Published / by AK / Leave a Comment

Before reading this post, you may want to read the official Microsoft documentation at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/assign-entity-permissions. It will give you a basic understanding of entity permissions in Microsoft CRM Portal.

Your customer wants to build a portal using Microsoft CRM Portal. Connections are used to link their contacts and accounts. The customer wants to control the permissions of their contacts (who will log in to the portal) on their related accounts, based on the connection role. If a contact is assigned the Admin role, they should be able to edit the account record; if it is the User role, they should have read-only access to the account record.

Using connection to assign role to accounts

Sounds like a familiar request from most customers?

If yes, let’s look at how we can implement this scenario in Microsoft CRM portal.

Continue reading

Microsoft CRM Portal (online) development essentials

Published / by AK / Leave a Comment

After developing a custom portal on Microsoft CRM Portals, I would like to share my experience.
This is a kind of starter guide for building highly customised web pages on Microsoft CRM Portals. It is a high-level and very informal guide.

Libraries and frameworks

To customise the portal, a good understanding of the following is essential.

Bootstrap

Microsoft CRM Portal uses Bootstrap. Use a Bootstrap customiser like https://www.bootstrap-live-customizer.com/ for quick customisation.

Continue reading

Default tab (timeline) of Activity Wall in CRM Portal

Published / by AK / Leave a Comment

Configuration and customisation of the CRM portal is always fun. You need to pay attention to even the smallest thing, as there is no one-stop place to configure everything. Everything is connected: the CRM form, the portal entity form, metadata, permission inheritance, and scripts. One misconfiguration can render the form incorrectly.

One day, we found that some portal forms were rendering the Timeline area like below, where the Notes area was supposed to be.

Continue reading

Client-side library for CRM Portal – Completion of first draft

Published / by AK / Leave a Comment

Before the end of 2016, I started working on Microsoft CRM Portal development. I realised I would have to use Liquid and JavaScript heavily if I wanted to customise it. It is true that we can use jQuery to manipulate its elements, but I never want to manipulate the DOM, especially in this type of application whose UI may change over time. Coming from CRM development, that will be completely understandable. Continue reading