React – Extended boundary of PowerApps Component Framework

Published by AK

In a previous post, I demonstrated how to use React with an earlier version of the PowerApps Component Framework (PCF) by modifying PCF packages, which is unsupported.

I am a strong believer that plain old JavaScript is not a good choice for building moderately complex components that use DataSets, and as such, since day one, I have been looking forward to Microsoft officially supporting React in the PowerApps Component Framework.

Recently, the Microsoft PCF team finally announced official support for React in PCF and released an example of how to achieve it. To my surprise, the approach I described in my previous blog post was conceptually similar to theirs.

I personally feel Microsoft’s example is too complex for someone who is not familiar with React.

In this post I am going to explain how to use React in the PowerApps Component Framework through a step-by-step walkthrough of a very basic component: a text box. This blog post is intended for developers who are not familiar with React.

React 

React is a JavaScript library for building user interfaces. You can learn about it at https://reactjs.org/docs/getting-started.html

Principles

Skipping the basic steps, below are the three React principles you need to be familiar with:

  1. Components – Components let you split the UI into independent, reusable elements, and force you to think about each element in isolation. Conceptually, components are like JavaScript functions. They accept arbitrary inputs (called “props”) and return React elements describing what should appear on the screen.
  2. Props – Props are immutable and therefore read-only. A component must never modify its own props.
  3. State – State is often described as local or encapsulated: it is not accessible to any component other than the one that owns and sets it.

For example, a caller can create a component and assign it some props; the component then manages its own state, as sketched below.
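Here is a minimal sketch (mine, not from the Microsoft example) showing all three principles in one place: a component that receives read-only props from its caller and manages its own state.

import * as React from 'react';

interface IGreetingProps { name: string; }    // props: supplied by the caller, read-only
interface IGreetingState { clicks: number; }  // state: owned and set by this component only

export class Greeting extends React.Component<IGreetingProps, IGreetingState> {
  constructor(props: IGreetingProps) {
    super(props);
    this.state = { clicks: 0 };  // local state, invisible to other components
  }

  render() {
    // the component reads this.props but never modifies it
    return (
      <button onClick={() => this.setState({ clicks: this.state.clicks + 1 })}>
        Hello {this.props.name} ({this.state.clicks} clicks)
      </button>
    );
  }
}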

React is designed for “top-down” or “unidirectional” data flow.

A picture is worth a thousand words

Before we start, I would like to present the following diagram. It summarises the concept of using React in PCF.

The PCF component passes the input value and a custom function (in this example, notifyChange) to the React component via props. In the React component, we copy the props into state in componentWillReceiveProps. From then on, we can change the state value inside the React component. When we decide to send the value back to PCF, we call notifyChange, which signals the new value to the PCF component.

Based on this diagram, we are going to build a simple text box using React.

Build a simple text box without React

You can follow the Microsoft documentation to create a PCF project. Once you open it in Visual Studio Code, let’s create a sample text box using TypeScript. You can find the sample code for this step here.

Principle #1 and creating a React component

To create a React component, let’s create a new file called ReactSampleTextBox.tsx and add the code from here. In the render() function, we create an input text.
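At this stage, the component does nothing but render an input. A minimal sketch of what ReactSampleTextBox.tsx could look like (illustrative; the linked sample is authoritative):

import * as React from 'react';

export class ReactSampleTextBox extends React.Component {
  render() {
    // a plain input text; no props or state yet
    return <input type="text" />;
  }
}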

Now, go back to index.ts and render the ReactSampleTextBox component. Code can be found here.
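The rendering itself is essentially a one-liner. A sketch of the relevant part of index.ts, assuming the container element that PCF hands to init() has been kept in a private field (names are illustrative):

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { ReactSampleTextBox } from './ReactSampleTextBox';

// inside the generated control class:
public updateView(context: ComponentFramework.Context<IInputs>): void {
  // mount (or re-render) the React component into the PCF container
  ReactDOM.render(React.createElement(ReactSampleTextBox), this.container);
}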

We start to see the benefit of using React: your index.ts becomes much cleaner. When you run the component, you will see an input text rendered by the React component. Note that at this point we have created a simple React component without passing any data from the PCF component.

Principle #2 and passing data

In PCF, this principle is about passing data from PCF to the custom React component. The principle states that we must use props to pass the data and that props must remain read-only. Since we also have state, we use componentWillReceiveProps to copy the value of the props into state.

Now, it’s time to modify index.ts and ReactSampleTextBox.tsx for passing data.

First, in ReactSampleTextBox.tsx, we need to declare an interface for the props. Let’s call it IProps, with a single property called value. We will use this value property to pass data from PCF to the React component. Your ReactSampleTextBox.tsx will look like the following: the value property of the input text is bound to the value prop of the React component.
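A sketch of the result (details may differ from the linked sample):

import * as React from 'react';

export interface IProps {
  value: string;  // the value handed down from the PCF component
}

export class ReactSampleTextBox extends React.Component<IProps> {
  render() {
    // the input is bound to the read-only props value
    return <input type="text" value={this.props.value} />;
  }
}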

In index.ts, declare an object props whose type is IProps from ReactSampleTextBox, assign its value in updateView, and pass it to the React component on rendering.
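Roughly, index.ts now keeps a props object and refreshes it on every updateView. A sketch (the property name sampleProperty is my assumption for whatever is defined in the manifest; imports are as before):

import { ReactSampleTextBox, IProps } from './ReactSampleTextBox';

// inside the generated control class:
private props: IProps = { value: '' };

public updateView(context: ComponentFramework.Context<IInputs>): void {
  // copy the bound input value into props, then re-render
  this.props.value = context.parameters.sampleProperty.raw || '';
  ReactDOM.render(React.createElement(ReactSampleTextBox, this.props), this.container);
}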

When you test the control, you will see that the input text in the React component shows any value you change via the Inputs panel. However, you will NOT be able to edit the value directly in the React text box. This is because we haven’t told the input text what to do. Let’s modify ReactSampleTextBox.tsx as follows: we create a handleChange function, bind it in the constructor, and bind it to the onChange event of the input text.

First, let’s create the IState interface, and copy the props value into the state in both the constructor and componentWillReceiveProps. Next, we create the handleChange function and bind it to the input text. Now, if you run the component, you can pass a value to the React component from the Inputs panel, and you can change the value in the text box.
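Put together, ReactSampleTextBox.tsx now looks roughly like this (componentWillReceiveProps was the prevailing pattern at the time of writing, though React has since deprecated it):

import * as React from 'react';

export interface IState {
  value: string;
}

export class ReactSampleTextBox extends React.Component<IProps, IState> {
  constructor(props: IProps) {
    super(props);
    this.state = { value: props.value };           // copy props into state
    this.handleChange = this.handleChange.bind(this);
  }

  componentWillReceiveProps(nextProps: IProps) {
    this.setState({ value: nextProps.value });     // keep state in sync with new props
  }

  handleChange(event: React.ChangeEvent<HTMLInputElement>) {
    this.setState({ value: event.target.value });  // the input is now editable
  }

  render() {
    return <input type="text" value={this.state.value} onChange={this.handleChange} />;
  }
}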

You will notice, however, that a value changed in the React text box does NOT show in the Outputs panel.

Principle #3 and its problem

In the PCF example, the third principle is about passing data from the custom React component back to PCF. Since React is “unidirectional”, how do we pass data from our React component back to PCF?

When using React for page rendering, a unidirectional data flow is not a problem: we tend to send data to the server side using HTML forms and re-render the page.

It is trickier with PCF: we receive a notifyOutputChanged callback in the init method, and we must call it to notify the PowerApps form that the value has changed in the component.

Getting around with the third principle issue

To get around this issue, we create a notifyChange function, which updates the output value of the PCF component and calls notifyOutputChanged.

We can pass the notifyChange function to our custom React component as part of the props. Whenever a value is updated in our React component, notifyChange is executed, which in turn calls notifyOutputChanged to update the output value and trigger the OnChange event on the PowerApps form.

Let’s make a small change to ReactSampleTextBox.tsx by adding an onChange property to IProps. Now, when handleChange is executed, we also execute the onChange function from the props.

Now, back to index.ts. Create a new function called notifyChange which calls notifyOutputChanged.

The last thing to do is to pass notifyChange from index.ts to ReactSampleTextBox as props.
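End to end, the wiring looks roughly like this (fragments; names follow the walkthrough):

// ReactSampleTextBox.tsx: the callback arrives as a prop
export interface IProps {
  value: string;
  onChange: (newValue: string) => void;
}

handleChange(event: React.ChangeEvent<HTMLInputElement>) {
  this.setState({ value: event.target.value });
  this.props.onChange(event.target.value);  // hand the new value back to PCF
}

// index.ts, inside the generated control class:
private notifyChange = (newValue: string): void => {
  this.value = newValue;         // field returned to the platform by getOutputs()
  this.notifyOutputChanged();    // callback captured from init()
}

// ...and notifyChange is passed to the React component as the onChange prop.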

Run it and enjoy the React component in PCF. The code can be found here.

I agree that this type of component can be built without React. My intention, however, is to explain how values are passed between PCF and a React component, and I believe building a very simple component is the best way to explain the whole process.

If you are a hard-core React developer, you might want to use Redux to manage the state in a clean way. A field template component is unlikely to require Redux; however, if you are building a complex dataset component, not only for display but also for CRUD, Redux is worth checking out.

Hopefully this post has been helpful to anyone beginning their adventure with React in PCF.

PowerApps Component Framework – Beyond the boundary

Published by AK

Note: I am going to write about some dark magic with PCF. It is not currently supported by Microsoft; these are my personal findings, and views are my own. But if you want to dismantle PCF and test its boundaries, read on.

Credit: Andrew Ly for his Number Button Selector control and feedback, and Rami Mounla for proofreading and correcting the content.

The source code can be downloaded from my GitHub.

Intro

Microsoft announced the public preview of the PowerApps Component Framework (PCF) and its Command Line Interface (CLI) in April 2019. Many developers, including me, have been anticipating this new feature. I find the documentation useful, and a few people have already started building and sharing new controls. No disappointment from this beautiful community.

Absolute need

TypeScript. Although I am not absolutely convinced by the preference for TypeScript over JavaScript (by the way, JavaScript is my favorite), it is the only choice we have right now. However, Andrew Ly has pointed out essential areas where TypeScript really shines, such as strong typing and OO concepts, bridging the gap between JavaScript and OO languages like C# and Java.

Beyond absolute need

There is no doubt you can build any control by following the Microsoft documentation. But I don’t want to stop here; I want to push its boundaries and limits. So far, I recommend adding three more types of knowledge.

1. React for UI components

The sample in the Microsoft documentation builds the UI using TypeScript, which reminds me of my student days, when an assignment asked me to build a UI in C++. I could not force myself to create a UI in a C++ program.

TypeScript for building modern UI? No, please.

I admit I was super lazy during my student life; I only wanted to deliver the bare minimum. So, UI in a code file is a NO for me. However, as my career began and I collected some experience, I allegedly became a bit wiser. Yet the wiser me supports the lazy me.

There are many libraries to choose from. However, you can find the react and react-dom packages under node_modules after executing “npm install” at step 4. So, React is the choice. You can also use JSX, which makes life easier when building the DOM.

2. NPM for packages

The motto: “Why reinvent the wheel?” NPM hosts thousands of free packages, and most well-established libraries are now distributed over NPM. It is like NuGet for JavaScript. Now, you may ask, “why not reference a CDN?” Well, that brings me to the next one.

3. Webpack for bundling

A module bundler. It bundles all your resources and NPM packages into a single package. Referencing a CDN only works when there is connectivity (it is a dependency). If we want to use the control offline on mobile devices, we may encounter issues.

The good news is that you don’t need to worry about NPM and webpack; the PCF CLI handles them gracefully in most cases. I did, however, notice some issues with adding large NPM packages (>500 KB): the bundling tends to behave weirdly.

Create a new PCF Project

Use the “pac pcf init” command to create a new PCF project. It pre-creates ControlManifest.Input.xml and index.ts.

[Image: pac pcf init]

Create a React component

Let’s create DemoComponent.tsx and write some script. The sample source can be found in my GitHub repository.
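A minimal DemoComponent.tsx could be as small as this (illustrative; the actual sample lives in the linked repository):

import * as React from 'react';

export class DemoComponent extends React.Component {
  render() {
    return <div>Hello from React inside PCF!</div>;
  }
}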

[Image: Project structure]

Run the ‘npm run build’ command. You will get an error saying ‘--jsx’ is not set. To resolve this, add “jsx”: “react” to the compilerOptions in tsconfig.json, located under your root folder. After this, you will be able to run ‘npm run build’ without any issue.

[Image: JSX error]
[Image: Updating tsconfig.json]

Now, let’s render DemoComponent in index.ts. You will see an error, and you will not be able to run the ‘npm run build’ command.

Render the React component

Now, let’s rename index.ts to index.tsx, and the error disappears from the IDE. You also need to update the code file extension in ControlManifest.Input.xml.

But when you run ‘npm run build’, you run into another issue: ‘Code file needs to be a typescript (.ts)’.

Looking at the error, it comes from the pcf-scripts module, located under the node_modules folder of your PCF project.

[Image: tsx error during build]

A quick search using the error message brings me to line 84 of controlcontext.js under the pcf-scripts module.

[Image: controlcontext.js only allows .ts]

You simply need to update the ‘if’ statement to also allow the .tsx extension and save the file. And magic happens.

[Image: controlcontext.js updated to allow .tsx]

Now, if you run ‘npm run build’, it succeeds. Next, it’s time to execute ‘npm start’. When the browser launches, you will see the error: ‘React’ is not defined.

[Image: React error]

To address this error, open index.html under your PCF root folder\node_modules\pcf-start and add the following two script tags.

<script src="https://unpkg.com/react@16/umd/react.development.js" crossorigin></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js" crossorigin></script>

As soon as you save your changes, your React component will be rendered.

[Image: React control demo]

Congratulations! You have created a custom control using PCF and React.

The procedure in this article is experimental only.

Final word

Even though you can deploy a React control to a model-driven app, this approach is not currently supported by Microsoft. I am sharing it only for experimental purposes, to show how PCF can be extended using modern libraries. I am neither a pro React developer nor a TypeScript developer, so I may have approached this the wrong way. If you have any comments, please share them below.

Six months after deploying my first PowerApps

Published by AK

This post is more of a rant than evangelism.

In October 2018, I was involuntarily involved in developing a PowerApps app for one of our clients at work. “Why involuntarily?” you may ask. Well, although I mainly work with the Microsoft platform, I play around with native iOS and Android app development and help out my friends. So I know fairly well the biggest challenge of mobile app development: making your app work across different devices.

PowerApps is great; I have no doubt about it. All canvas apps run inside the native PowerApps app on your mobile device, which makes it very different from cross-platform mobile development. It also means you have no control over releases of the native PowerApps app.

Despite my strong objection, we managed to deliver the app in 10 days. The purpose of the app was to replace the paperwork of auditing farms. Basically, an admin creates audit records in D365 and assigns them to auditors; auditors download the records to their apps for offline use and drive out to remote areas for auditing (which also captures photos and signatures). When auditors arrive back home at night, they sync their offline data to D365 and get newly assigned audits. Photos need to be uploaded to the client’s Google Drive. Why not? Microsoft Flow has an out-of-the-box connector for Google Drive. Easy peasy.

Then, the client wanted to add new functionality to share the photos uploaded to Google Drive with the audited farm. Although you can share files/folders in Google Drive, the out-of-the-box connector is not capable of doing it. After going through the Google Drive REST APIs, we solved it by creating a custom connector. Delivering all these functions in 10 days was impressive. It was working fine. Well … for two months.

In December, the client told us some of their auditors were facing issues syncing records. We enhanced the app for better performance. During the enhancement, everything was fine. But when the app was re-opened, it showed over 200 errors on one of the global variables in the app. Suddenly, the Set and Patch functions on a record-type global variable didn’t work anymore. (You can find a similar issue here, though I didn’t report it.)

I ended up re-creating the app almost from scratch. Tweak and fix, again and again. One of the challenges with the web player is that you can’t test offline mode: SaveData and LoadData don’t work there. I hope Microsoft will at least create stubs for these two functions in the web player in the future. To test on a device, you have to publish the app and open it at least twice on the device to see the latest version. It is not productive at all, but we managed to fix all the issues.

But there was another problem. Microsoft had released a new version of PowerApps, and unfortunately it was a bad one. Without noticing, I kept working on the app. By the time I published it to test on my device, it was too late; there was no way back. If I rolled back to the working version, all my changes would be lost. If I stuck with the current version, it couldn’t be tested. It didn’t look good at all.

Luckily, Microsoft released a bug fix over the weekend and the original issue was gone. But, again, the app started to behave weirdly; functionality that used to work didn’t work anymore. For example, if I rolled back to the old working version, it worked, but not on the latest PowerApps version. It is so frustrating for a developer: you have no control over which PowerApps runtime to use. Microsoft dictates it. Because of this, one bad runtime version can create many issues even if you don’t make any changes.

During the enhancement, we started to notice errors when listing Google Drive files in Microsoft Flow. After searching the Google Drive REST APIs, it turns out Google limits the items returned by the API to 100 by default, though this can be changed. However, there is no way to set the page size in the out-of-the-box connector. We didn’t have the issue in UAT or in the early days of production go-live, since there were fewer than 100 items; now we did. As usual, a custom connector saved our life.

All issues seem to be sorted out, at least for now. Hopefully, the app will behave nicely for the next two months. But I learned the following lessons.

  • PowerApps is easy but not necessarily simple – there are a lot of gotchas, and similar functions with totally different behaviours.
  • A lack of ability to control the PowerApps version – one bad version can break your app into pieces just by publishing, without any changes on your part. If it happens during the development phase, it will stall the timeline.
  • Out-of-the-box connectors are useful for demos but not always reliable for production – always refer to the source API documentation, not the Microsoft Flow documentation.
  • The community is great, but not all issues are resolved – looking at the example with the Set and Patch functions in the link above, it took about two months for the Microsoft PowerApps team to recognise the issue. The worst part is that it is still not resolved.

Channel Integration Framework with Twilio – Part 1

Published by AK

I have posted a video of an incoming call using CIF here. It is a live working demo using a Twilio trial account. I am going to share my experience of building it, so anyone can easily set up the basic features and start exploring more awesomeness.

I tested the public preview of the Microsoft Dynamics 365 Channel Integration Framework before the Christmas and New Year holidays. Moving house over the holiday period then kept me occupied for weeks.

Last week, I started to look into CIF again. During the public preview, I had issues with incoming and outgoing calls: I was able to connect to the Twilio service, and outgoing/incoming calls that return pre-defined messages were successful, but I couldn’t figure out how to make a call to a real number.

In this post, I am going to cover D365 users responding to incoming calls directly from the web interface, without leaving D365 at all.

Preparation

Before we start, please go through the following:

Now, we can begin!

Setting up

You will find the steps provided in the Microsoft documentation easy to follow, but I had problems with the section on creating Twilio functions (https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/channel-integration-framework/sample-softphone-integration#create-function-to-use-with-the-app-service).

According to the readme included in the sample code, you have to create functions in Twilio. I followed the instructions and couldn’t make it work. So, I started by creating a basic client and later added a couple of controllers to the sample code to set up a Twilio client.

You can follow the tutorial at https://www.twilio.com/docs/voice/client/tutorials/how-to-set-up-a-server-for-twilio-client to create a basic client including a TokenController and a VoiceController. Creating our own controllers gives us the full capability of the web: flexibility and fine-grained control of call processes, such as limiting functionality or dynamic routing driven by data stored elsewhere.

In the Twilio console, it is very easy to get confused and lost in the navigation. However, we can get the information we need from:
– Twilio account dashboard at https://www.twilio.com/console
– TwiML apps at https://www.twilio.com/console/voice/twiml/apps
– Phone numbers at https://www.twilio.com/console/phone-numbers/incoming

In short, you need to

Once you have deployed the sample code, you need to configure your trial Twilio phone number. First, let’s create a TwiML document to route the call, and store it in a TwiML Bin. You can find TwiML Bins under Runtime or at https://www.twilio.com/console/runtime/twiml-bins/. For testing purposes, let’s create the following TwiML.

[Image: TwiML]
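If the screenshot is unavailable: based on the description below, the Bin content is essentially a Dial verb routing to a named client (reconstructed; your client name will differ):

<Response>
  <Dial>
    <Client>ak</Client>
  </Dial>
</Response>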

You may ask, “what the heck is ‘ak’ doing between the Client tags?” Well, it directly relates to the client name of the token generated in TokenController.cs.

[Image: TokenController]

Once the TwiML is saved, go to your phone number and configure its “A CALL COMES IN” setting by choosing TwiML and selecting the newly created TwiML Bin.

[Image: Phone Number Configuration]

That’s all you need to configure to receive and answer calls using CIF. It takes a while to get right; reading the Twilio documentation helps a lot.

Enhancements

You will notice that I use a static TwiML to route to the client in the phone number configuration. In the real world, I think a webhook is a better approach, where you return TwiML dynamically according to your business rules. Similarly, TokenController.cs could generate a dynamic token for each logged-in user, so the call could be routed to the correct client.

Need help?

Reach out to me if anything is confusing or you need help.

Read Office 365 Message Centre using Office 365 Management API

Published by AK

I started my career as a web developer, and it is one of the best decisions I have ever made. The web keeps evolving, and because of it, low-code platforms like Microsoft Flow make our lives a lot easier. In this post, I am going to show you how to read messages from the Office 365 Message Centre using its API.

Office 365 Management API is well documented at https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-management-apis-overview

Since it is a web API, Microsoft Flow can easily make an HTTP request and parse the response. To do this, let’s register an app in the Azure Portal first. As we are going to call it from Microsoft Flow, you need to create a Web app / API type application. Also make sure to grant the app the correct permission, which should be set as below.

[Image: Permissions to read messages]

Ensure you click Grant Permissions after choosing the correct permissions; otherwise, the authorisation will fail.

After this, the rest is quite straightforward. I have created the following Flow. In general, it will:

  1. read the last message time from my Google sheet
  2. check the value of the last message time, and set the corresponding StartTime condition in the HTTP step so we won’t retrieve the same messages over and over again
  3. parse the data and insert it into my Google sheet

[Image: Message Centre Flow]

The HTTP request step is set up as below. It is quite straightforward; note that we need to filter on StartTime properly.

[Image: Office 365 Management API HTTP Request]
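If the screenshot is unavailable: the request is a GET against the Service Communications endpoint, shaped roughly as follows (the exact filter expression is my assumption):

GET https://manage.office.com/api/v1.0/{tenantId}/ServiceComms/Messages?$filter=MessageType eq 'MessageCenter' and StartTime ge {lastMessageTime}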

Once the Flow is executed, all messages are nicely inserted into my Google sheet.

[Image: Messages in Google Sheet]

This is all thanks to the power of the web. Of course, Microsoft Flow makes things easier. That’s all for now.

Passing current login user of Microsoft Dynamics CRM Portal to external web app

Published by AK

You can get the details of the current logged-in user in a Liquid template via the user Liquid object. There is another way to get the current user: via an XHR call.

Microsoft CRM Portal has a built-in API to generate a JWT for the current logged-in user. The API is at https://&lt;crm portal url&gt;/_services/auth/token and returns a JWT. A JWT is nothing but a JSON object, base64-encoded and signed using the RS256 algorithm; it is not encrypted, so anyone can decode it. In other words, anyone can craft a similar-looking token too.

You sometimes need to pass the current user’s information to an external web app. Since it takes very little effort to forge a JWT and pass it to your external website, it would be easy to bypass security if you accepted tokens blindly. Therefore, you will definitely want to verify the authenticity of the token, to ensure it was generated by a trusted source (in this case, your CRM portal).

The beauty of JWT is that you can verify the token’s signature using a public key. If you are not familiar with PKI: the source (the CRM portal) signs the token using its private key (already handled by the portal), and the target (your external web app) verifies the token’s authenticity using the corresponding public key. To do this, get the public key of your CRM portal at https://&lt;crm portal url&gt;/_services/auth/publickey.

The whole process is:

  1. Pass the JWT as a parameter in a web request/link to your external web app
  2. In your external web app, get the public key from the CRM portal and verify the signature of the JWT contained in the web request (a minimal sketch follows below)
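In the external web app, the verification is only a few lines. A minimal sketch in Node/TypeScript, assuming the jsonwebtoken package and a public key already fetched from the portal (names are illustrative):

import * as jwt from 'jsonwebtoken';

// token: the JWT received with the incoming request
// publicKey: PEM text from https://<crm portal url>/_services/auth/publickey
function verifyPortalToken(token: string, publicKey: string): object | null {
  try {
    // verify() checks the RS256 signature and standard claims; it throws if invalid
    return jwt.verify(token, publicKey, { algorithms: ['RS256'] }) as object;
  } catch {
    return null;  // invalid signature or expired token: do not trust the caller
  }
}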

That’s easy, simple and neat. Right?

Next time, we will have a look at Azure AD B2C configuration to authenticate users, which requires more configuration and adds a little complexity.

Accessing Dynamics 365 WebAPI as an application user without using ADAL

Published by AK

Microsoft has recently posted documentation for using Postman with the Dynamics 365 WebAPI at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/webapi/use-postman-web-api. This approach prompts you to log into the application with your credentials to generate a bearer token, which is then passed to the Dynamics 365 WebAPI with each request.

However, we sometimes need to use an application user to access the WebAPI, either for integration testing or for implementing automations. Microsoft has documented server-to-server (S2S) authentication using the Azure Active Directory Authentication Libraries (ADAL) at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

Although I am a developer, I strongly believe we can (and should) automate a process only once it can be done manually. Therefore, tools like Fiddler, SoapUI and Postman are essential to me when it comes to testing WebAPIs.

Now, I will show you how we can generate an access token using an application user in Postman, assuming you have already created an application user. If you need detailed steps to create an application user, please follow the instructions at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

In Postman, let’s create a few Postman environment variables.

  • TenantId – to store the GUID of the Azure Active Directory (Directory ID)
  • D365Url – to store the URL of the Dynamics 365 instance
  • D365AppId – to store the Azure application ID
  • D365AppSecret – to store the application secret
  • D365BearerToken – to store the generated access token

[Image: Environment variables]

Then, let’s create a new POST request to https://login.microsoftonline.com/{{TenantId}}/oauth2/token with the following data.

[Image: Token request]
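If the screenshot is unavailable: the body is a standard client-credentials grant, sent as x-www-form-urlencoded and shaped roughly like this (using the environment variables above):

grant_type=client_credentials
client_id={{D365AppId}}
client_secret={{D365AppSecret}}
resource={{D365Url}}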

Now, send the request and it should return an access token (assuming the application user has been set up correctly).

[Image: Access Token]

Now, we need to store the access token in our Postman environment variable D365BearerToken so that we can re-use it when accessing the Dynamics 365 WebAPI. To do this, we need to write a simple script under the Tests tab.

[Image: Writing tests]
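The script itself is essentially one line, using Postman’s pm API:

// grab access_token from the JSON response and store it for later requests
pm.environment.set("D365BearerToken", pm.response.json().access_token);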

Pretty easy, right? Now, let’s create another GET request to retrieve the version number of the Dynamics 365 instance. The request’s authorization type should be set to ‘Bearer Token’ and its token value should refer to our Postman environment variable D365BearerToken.

[Image: Retrieve version]
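The request is roughly the following (the WebAPI version segment may differ on your instance):

GET {{D365Url}}/api/data/v9.1/RetrieveVersion()
Authorization: Bearer {{D365BearerToken}}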

Now, you can test any WebAPI calls using an application user without relying on ADAL.

What’s next?

With this approach, you are not limited to Dynamics 365 OData queries in Flow/Logic Apps; you can now call any Dynamics 365 message and perform various tasks.

Creating a custom connector to upload a file to Google Drive from PowerApps – Part 2

Published by AK

Part 1 can be found here.

Swagger time

To create a custom connector, you can upload either a swagger file or a Postman collection. I am not a swagger expert, but apistudio makes my life easier. Let’s jump onto http://specgen.apistudio.io.

Paste the URL that you used in Postman. Make sure you are using POST. You should receive a 202 status after sending the request. One more thing to remember: click the ‘Next Step‘ button instead of clicking the tabs; otherwise, you will need to redo your work.

[Image: Generate swagger]

Enter the API program and contact info. Make sure to use a valid Contact Url and Contact Email. What’s next? The Next Step button.

[Image: Swagger contact info]

Next is API Info. You can play around with the slider, which changes the API Base Path and API Path. Give it a meaningful OperationId.

[Image: API Info]

There is nothing special to do with Headers Info and Params Info, although we will make some manual changes to them later. I don’t see (and don’t know) a way to make those changes in the generator.

Keep clicking Next Step until you reach the Open API Spec, where you can download the swagger file.

[Image: Open API Spec]

Now, let’s make a few changes to the swagger file.

First, we need to replace null with application/json in the “produces” collection, and remove the whole “content-type” section.

[Image: Remove content-type]
[Image: Modify produces]

Set default values for api-version, sp, sv and sig. You can find these values in the URL or using Postman.

[Image: Set default values]

Lastly, you need to add a parameter for the file content, which will be read from formData as a file type (see the fragment below).

[Image: Adding file params]
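For reference, in Swagger 2.0 the added parameter looks roughly like this (the parameter name is illustrative):

{
  "name": "file",
  "in": "formData",
  "description": "content of the file to upload",
  "required": true,
  "type": "file"
}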

That’s pretty much all you need to do in the swagger file before creating a custom connector in PowerApps.

Creating a custom connector

Launch PowerApps in the browser, go to Custom connectors, and import your swagger file.

[Image: Create a new custom connector]
[Image: Create custom connector]

Generally, you don’t need to change anything until you get to 3. Definition. At 2. Security, you can still use ‘No authentication’, as the API authenticates using the sig parameter provided in the query string. As long as you don’t expose the signature parameter, unauthorised users cannot call your API.

At step 3, scroll to the Request section, where you will see the query parameters. Edit api-version, sp, sv and sig, and change their visibility to internal, since we set default values in the swagger file.

[Image: Modify visibility]

After changing the visibility of api-version, sp, sv and sig, you can create the custom connector.

Once it is created, it is time to head into PowerApps. In your PowerApps app, while adding a new data source, you can now find your custom connector.

[Image: Custom connector in PowerApps]

After creating the data source, let’s add some controls to PowerApps and check whether it’s working. I have added a PenInput and a Button as below. On the Button’s OnSelect, I am going to push the image from the PenInput to my Google Drive.

[Image: Sample PowerApps]

Voila. It is now uploaded to Google Drive.

[Image: PenInput to Google Drive]

PowerApps does not work

You know it, right? Save, close and open your PowerApps app again.

One last thing

This is only an example of how we can easily (of course, I can now say ‘it is easy’) expose any Flow functionality as a custom connector. It also addresses the concern with the Camera control, which produces a low-resolution photo. With this approach, we can use the AddPicture control not only to take full advantage of the device camera, but also to pick up photos/files that already exist on the device.

Generally, we don’t need custom connectors when parameter types are simple. In this example, we need Base64. I suspect that, depending on the parameter type of an endpoint, PowerApps passes a suitable representation.

Anyway, we can now invoke the near-infinite range of Flow connectors from PowerApps.

Creating a custom connector to upload a file to Google Drive from PowerApps – Part 1

Published by AK

Knowing the limitation

Although Microsoft Flow and PowerApps both offer ready-to-use connectors for different services, it seems they do not have the same level of offerings.

In Flow, for instance, there is a connector for Google Drive with a bunch of actions like creating/deleting a file and browsing folders.

[Image: Google Drive Actions in Flow]

However, you cannot use those actions in PowerApps. Apparently, the Google Drive connector in PowerApps only works with Excel as a data source.

[Image: Google Drive in PowerApps]

Additionally, if you pay attention to the different image controls, such as PenInput, Camera and Add Media, they have different properties that look like they do the same thing. For example, Camera.Photo returns a Data URI of the picture, while PenInput.Image and AddMediaButton.Media effectively return a local blob storage URL (appres://blobmanager/&lt;GUID&gt;/&lt;serial number&gt;) which can only be accessed within the app.

Given the inability to use the full functionality of connectors, and local blob storage URLs that are meaningless outside the app, we need to look at what else we can do.

Knowing the out-of-box capability

Given that Camera.Photo returns a Data URI string of an image, we can convert it into binary and save it to Google Drive. However, there is another small issue: the Camera control in PowerApps does not use the full resolution of the device camera. The result is ridiculously small (at least to me). Even if we can live with small photos, we still need a workaround for the other controls.

Out of the box, you can upload images from the above controls to Azure Blob Storage, and from there copy the file to the destination you want. This is fine for a client who already has an Azure subscription, but not a neat solution for one who doesn’t. Trust me, the cost is never the problem (Azure Blob Storage is super cheap); it’s more about managing different services. If that is not an issue for you, this is a way to go.

Going an extra mile

This is where the real fun starts.

We always want to go the extra mile to see what else is there. On the bright side, PowerApps allows us to create a custom connector, backed by a Flow, to upload a file to Google Drive.

First, let’s create a Flow that is triggered when an HTTP request is received. There is only one such trigger available; it is basically a webhook.

[Image: HTTP Request Trigger]

Then, initialise the ‘FileName’ variable, so we can save the Flow. The expression for the value is trigger()['outputs']['queries']['filename'], as we are planning to pass the filename query string during the request.

[Image: Initialise FileName variable]

After saving the Flow, expand the HTTP trigger to grab the URL to invoke.

[Image: URL to invoke]

Copy that URL into Postman and append the filename query string to the URL. Please note the sig query string should be kept secret.

[Image: Test from Postman]

After invoking the URL, you will see the Flow executed. When you open the executed Flow instance, you will see the filename query string assigned to the FileName variable.

[Image: Test Result]

Now, let’s try sending a file from Postman and look at how the Flow extracts the file content. In Postman, choose File form-data and send the request.

[Image: Send the file from Postman]

Open the executed Flow. This time, you will see ‘click to download’ in the output of the HTTP request.

[Image: Test Result with file]

When you open it, you will see a massive output, but we are only after a few things: body and $content. $content is the Base64 of the uploaded file (see the shape below).

[Image: Multipart body content]
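The part we care about sits inside the multipart body and, if memory serves, is shaped roughly like this (abbreviated; key names follow the Logic Apps content envelope):

{
  "$multipart": [
    {
      "headers": { "Content-Disposition": "form-data; name=\"file\"; filename=\"sample.png\"" },
      "body": {
        "$content-type": "image/png",
        "$content": "iVBORw0KGgo..."
      }
    }
  ]
}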

To get this content, add another Initialize variable action. This time we set its type to Object, as the Google Drive action accepts JSON.

[Image: Prepare file content]

Now, let’s add another action to upload a file to Google Drive. It will ask you to connect to Google if you have no prior connection.

[Image: Google Drive connector]

Now, let’s choose the folder path in Google Drive, the file name and its content. You will need to write an expression for the file content to get the object from the FileContent variable.

[Image: Google Create File action]

Save the Flow, go back to Postman, and send the request. You will see the Flow executed and a file uploaded to your Google Drive.

I will cover the process of converting this Flow into a custom connector and calling it directly from PowerApps, in Part 2.

Multiple user roles and entity permissions in Microsoft CRM Portal

Published by AK

Before reading this post, you may want to read the official Microsoft documentation at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/assign-entity-permissions. It will give you a basic understanding of entity permissions in Microsoft CRM Portal.

Your customer wants to build a portal using Microsoft CRM Portal. Connections are used to link their contacts and accounts, and the customer wants to control the permissions of contacts (who will log in to the portal) on their related accounts, based on the connection role. If a contact is assigned the Admin role, they should be able to edit the account record; if they have the User role, they should have read-only access to the account record.

[Image: Using connection to assign role to accounts]

Does this sound like a familiar request from customers?

If yes, let’s look at how we can implement this scenario in Microsoft CRM portal.
