If you upgraded AKS recently (to v1.23+), you might have noticed some containers stopped working, most of the time failing to start with messages like “Error: failed to start containerd task ‘solr’: hcs::System::CreateProcess solr: The system cannot find the file specified.: unknown”.
As we spent quite some time researching those issues and also contacted Sitecore support, I’ve decided to write this post so it can be helpful to anyone else facing the same kind of issues.
Even though the Docker runtime is still available in v1.23, clusters now come with containerd by default, so you will get that kind of exception. But bear in mind that Docker is going to be fully removed in v1.24, so I suggest you take action as soon as possible to avoid blocking your upgrade or facing issues later.
About Sitecore default images
If you are using Sitecore images v10.0.2 or earlier, you will find them failing to start, mostly the solr-init and mssql-init containers.
Sitecore has fixed the images for the following versions:
10.0.3
10.1.2
10.1.3
10.2.0
10.2.1
Please note that if you are referencing the image versions using the “two-digit” tag, then you’re good to go, as you would be getting the latest patch version.
About the runtime deprecation
AKS announced the deprecation of Docker in version 1.20:
Dependency on Docker explained
A container runtime is software that can execute the containers that make up a Kubernetes pod. Kubernetes is responsible for orchestration and scheduling of Pods; on each node, the kubelet uses the container runtime interface as an abstraction so that you can use any compatible container runtime.
In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker. Later in the Kubernetes project’s history, cluster operators wanted to adopt additional container runtimes. The CRI was designed to allow this kind of flexibility – and the kubelet began supporting CRI. However, because Docker existed before the CRI specification was invented, the Kubernetes project created an adapter component, dockershim. The dockershim adapter allows the kubelet to interact with Docker as if Docker were a CRI compatible runtime.
Switching to Containerd as a container runtime eliminates the middleman. All the same containers can be run by container runtimes like Containerd as before. But now, since containers are scheduled directly with the container runtime, they are not visible to Docker. So any Docker tooling or fancy UI you might have used before to check on these containers is no longer available.
You cannot get container information using docker ps or docker inspect commands. As you cannot list containers, you cannot get logs, stop containers, or execute something inside a container using docker exec.
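Since the Docker CLI is gone, the practical replacements are kubectl (and, on the node itself, crictl). The commands below are generic equivalents; pod, container, and namespace names are placeholders:

```shell
# kubectl equivalents of the common Docker commands on a containerd cluster
kubectl get pods -n my-namespace                  # ~ docker ps
kubectl logs my-pod -c solr -n my-namespace       # ~ docker logs
kubectl exec -it my-pod -c solr -- powershell     # ~ docker exec
kubectl describe pod my-pod -n my-namespace       # ~ docker inspect

# On the node itself, crictl talks to containerd directly:
crictl ps
crictl logs <container-id>
```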
Please refer to the official documentation for deeper details:
OK, so now that things are a bit clearer and we know the Sitecore base images are fixed in the latest versions (at least for v10), what about our custom ones?
So far, I’ve identified some changes required in our Dockerfiles to make them work as expected on the containerd runtime.
ENTRYPOINT and CMD
The syntax is slightly different; I’ll share examples so the changes are easier to understand.
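As an illustrative sketch (the script path C:\Start.ps1 is a placeholder, not from the original images): the safest pattern under containerd is the exec (JSON) form with an explicit path to the executable, rather than the shell form, which relies on cmd’s command resolution and can surface exactly the “cannot find the file specified” error shown above.

```dockerfile
# Shell form – resolved through cmd; under containerd this can fail with
# "CreateProcess <name>: The system cannot find the file specified"
# ENTRYPOINT powershell -Command "& C:\Start.ps1"

# Exec (JSON) form with an explicit executable path – works on both runtimes
ENTRYPOINT ["powershell", "-Command", "& C:\\Start.ps1"]
```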
I’ve come across the requirement for supporting a multi-site Sitecore-SXA approach with a single rendering host (Next.js app).
With this approach, we want to lower the costs by deploying to a single Vercel instance and making use of custom domains or sub-domains to resolve the sites.
If you have a look at the Sitecore Next.js SDK and/or the starter templates, you’ll notice that there is no support for multi-site. So here I’ll go through a possible solution for this scenario, where we also need to keep the SSG/ISR functionality from Next.js/Vercel.
The approach
To make it work, we basically need to resolve the site we’re trying to reach (from the hostname or subdomain) and then pass it through to the LayoutService and DictionaryService so they resolve it properly.
As we’ve also enabled SSG, we’ll need to customize getStaticPaths so it generates the sitemap for each site.
Resolving the site by custom domains or subdomains
As I mentioned in the title of the post, I’ll be using Edge Middleware for that, so I’ve based this on the examples provided by Vercel, check the hostname-rewrites example!
For more details on Edge Middleware, please refer to my previous post!
Dynamic routes
Dynamic Routes are pages that allow you to add custom parameters to your URLs. So, we can then add the site name as a param to then pass it through the layout and dictionary services. For more details on dynamic routing, check the official documentation and the example here!
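To make the mechanics concrete, here’s a small framework-free sketch of how a rewritten URL maps onto the params of pages/_sites/[site]/[[...path]].tsx. This is a simplified re-implementation for illustration, not the actual Next.js router:

```typescript
// Simplified illustration of how Next.js fills context.params for the
// dynamic route pages/_sites/[site]/[[...path]].tsx
function parseSiteRoute(pathname: string): { site: string; path: string[] } | null {
  const match = pathname.match(/^\/_sites\/([^/]+)(?:\/(.*))?$/);
  if (!match) return null;
  return { site: match[1], path: match[2] ? match[2].split('/') : [] };
}

// parseSiteRoute('/_sites/multisite_poc/about')
// → site: 'multisite_poc', path: ['about']
```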
Demo!
Now that we know all the basics, let’s move forward and make the needed changes.
For the demo, I’m just creating a new Sitecore Next.js JSS app using the JSS initializer and the recently released Sitecore Demo Portal! Check out this great blog from my friend Neil Killen for a deep overview of it!
Changes to the Next.js app
To accomplish this, as already mentioned, we have to play with dynamic routing, so we start by moving [[…path]].tsx into a new folder structure under ‘pages’: pages/_sites/[site]/[[…path]].tsx
Then we have to create the middleware.ts file in the root of src. The code here is quite simple: we get the site name from the custom domain and then update the pathname with it to do a URL rewrite.
import { NextRequest, NextResponse } from 'next/server';
import { getHostnameDataOrDefault } from './lib/multisite/sites';

export const config = {
  matcher: ['/', '/_sites/:path'],
};

export default async function middleware(req: NextRequest): Promise<NextResponse> {
  const url = req.nextUrl.clone();

  // Get hostname (e.g. vercel.com, test.vercel.app, etc.)
  const hostname = req.headers.get('host');

  // If localhost, assign the host value manually
  // If prod, get the custom domain/subdomain value by removing the root URL
  // (in the case of "test.vercel.app", "vercel.app" is the root URL)
  const currentHost =
    // process.env.NODE_ENV === 'production' &&
    hostname?.replace(`.${process.env.ROOT_DOMAIN}`, '');
  const data = await getHostnameDataOrDefault(currentHost?.toString());

  // Prevent security issues – users should not be able to canonically access
  // the pages/_sites folder and its respective contents.
  if (url.pathname.startsWith(`/_sites`)) {
    url.pathname = `/404`;
  } else {
    // rewrite to the current site
    url.pathname = `/_sites/${data?.subdomain}${data?.siteName}${url.pathname}`;
  }
  return NextResponse.rewrite(url);
}
You can see the imported function getHostnameDataOrDefault called there, so next, we add this to /lib/multisite/sites.ts
const hostnames = [
  {
    siteName: 'multisite_poc',
    description: 'multisite_poc Site',
    subdomain: '',
    rootItemId: '{8F2703C1-5B70-58C6-927B-228A67DB7550}',
    languages: ['en'],
    customDomain: 'www.multisite_poc_global.localhost|next12-multisite-global.vercel.app',
    // Default subdomain for Preview deployments and for local development
    defaultForPreview: true,
  },
  {
    siteName: 'multisite_poc_uk',
    description: 'multisite_poc_uk Site',
    subdomain: '',
    rootItemId: '{AD81037E-93BE-4AAC-AB08-0269D96A2B49}',
    languages: ['en', 'en-GB'],
    customDomain: 'www.multisite_poc_uk.localhost|next12-multisite-uk.vercel.app',
  },
];

// Returns the default site (Global)
const DEFAULT_HOST = hostnames.find((h) => h.defaultForPreview);

/**
 * Returns the data of the hostname based on its subdomain or custom domain,
 * or the default host if there's no match.
 *
 * This method is used by middleware.ts
 */
export async function getHostnameDataOrDefault(subdomainOrCustomDomain?: string) {
  if (!subdomainOrCustomDomain) return DEFAULT_HOST;

  // check if site is a custom domain or a subdomain
  const customDomain = subdomainOrCustomDomain.includes('.');

  // fetch data from mock database using the site value as the key
  return (
    hostnames.find((item) =>
      customDomain
        ? item.customDomain.split('|').includes(subdomainOrCustomDomain)
        : item.subdomain === subdomainOrCustomDomain
    ) ?? DEFAULT_HOST
  );
}

/**
 * Returns the site data by name
 */
export async function getSiteData(site?: string) {
  return hostnames.find((item) => item.siteName === site);
}

/**
 * Returns the paths for `getStaticPaths` based on every available hostname.
 */
export async function getSitesPaths() {
  // get all sites
  const subdomains = hostnames.filter((item) => item.siteName);
  // build paths for each of the sites
  return subdomains.map((item) => ({
    site: item.siteName,
    languages: item.languages,
    rootItemId: item.rootItemId,
  }));
}

export default hostnames;
I’ve added the custom domains I want to use to resolve the sites. I’ve defined two per site, as I want this to work both locally and when deployed to Vercel.
Changes to the getStaticProps
We keep the code as it is in [[…path]].tsx; you’ll see that the site name is now part of context.params (add some logging there to confirm this).
[[…path]].tsx
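The relevant point is simply that context.params now carries the site segment. A stripped-down sketch (the real file delegates to the JSS page-props factory; names here are illustrative):

```typescript
type SsgContext = { params?: { site?: string; path?: string[] } };

// Stripped-down view of what [[...path]].tsx receives after the middleware
// rewrite: the site name arrives alongside the path tokens.
const getSiteFromContext = (context: SsgContext): string | undefined => {
  console.log('Resolved site:', context.params?.site); // the logging mentioned above
  return context.params?.site;
};
```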
Changes to page-props-factory/normal-mode.ts
Now we need to get the site name from the context parameters and pass it on to the Layout and Dictionary services. I’ve also updated the dictionary-service-factory.ts and layout-service-factory.ts constructors to accept the site name and set it up.
normal-mode.ts
dictionary-service-factory.ts
layout-service-factory.ts
Please note that the changes are quite simple: we just send the site name as a parameter to the factory constructors. For the dictionary service, we also set the root item ID.
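As a rough sketch of the factory change (illustrative only; the real factories wrap the GraphQL services from the JSS SDK, and the names below are assumptions):

```typescript
// Hypothetical shape of the updated layout-service-factory: create() now
// takes the resolved site name instead of always using the configured app name.
const JSS_APP_NAME = 'multisite_poc'; // stand-in for config.jssAppName

class LayoutServiceFactory {
  create(siteName?: string): { siteName: string } {
    // Fall back to the default app when the middleware resolved nothing
    return { siteName: siteName || JSS_APP_NAME };
  }
}
```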
Changes to getStaticPaths
We now have to modify getStaticPaths so it builds the sitemap for SSG taking all sites into account. The change is also quite simple:
// This function gets called at build and export time to determine
// pages for SSG ("paths", as tokenized array).
export const getStaticPaths: GetStaticPaths = async (context) => {
  ...
  if (process.env.NODE_ENV !== 'development') {
    // Note: Next.js runs export in production mode
    const sites = (await getSitesPaths()) as unknown as Site[];
    const pages = await sitemapFetcher.fetch(sites, context);
    const paths = pages.map((page) => ({
      params: { site: page.params.site, path: page.params.path },
      locale: page.locale,
    }));
    return {
      paths,
      fallback: process.env.EXPORT_MODE ? false : 'blocking',
    };
  }

  return {
    paths: [],
    fallback: 'blocking',
  };
};
As you can see, we are modifying the fetcher and passing the sites’ data to it as an array so it can process all of them. Please note the site param is now mandatory, so it needs to be returned in the paths data.
Custom StaticPath type
I’ve defined two new types I’ll be using here, StaticPathExt and Site
Site.ts
StaticPathExt.ts
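The two type files aren’t reproduced in this excerpt, so here they are reconstructed from how they are used below (field names inferred from getSitesPaths and the sitemap plugin; treat this as a sketch):

```typescript
// lib/type/Site.ts – the shape returned by getSitesPaths()
type Site = {
  site: string;         // Sitecore site name
  languages: string[];  // languages to build the sitemap for
  rootItemId: string;   // site root item GUID
};

// lib/type/StaticPathExt.ts – a static path extended with the site param
type StaticPathExt = {
  params: { path: string[]; site?: string };
  locale?: string;
};

const example: StaticPathExt = {
  params: { path: ['about'], site: 'multisite_poc' },
  locale: 'en',
};
```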
We need to make some quick changes to the sitemap fetcher (sitemap-fetcher/index.ts) now: basically, pass the sites info array down to the plugins and return the new StaticPathExt type.
import { GetStaticPathsContext } from 'next';
import * as plugins from 'temp/sitemap-fetcher-plugins';
import { StaticPathExt } from 'lib/type/StaticPathExt';
import Site from 'lib/type/Site';

export interface SitemapFetcherPlugin {
  /**
   * A function which will be called during page props generation
   */
  exec(sites?: Site[], context?: GetStaticPathsContext): Promise<StaticPathExt[]>;
}

export class SitecoreSitemapFetcher {
  /**
   * Generates SitecoreSitemap for given mode (Export / Disconnected Export / SSG)
   * @param {GetStaticPathsContext} context
   */
  async fetch(sites: Site[], context?: GetStaticPathsContext): Promise<StaticPathExt[]> {
    const pluginsList = Object.values(plugins) as SitemapFetcherPlugin[];
    const pluginsResults = await Promise.all(
      pluginsList.map((plugin) => plugin.exec(sites, context))
    );
    const results = pluginsResults.reduce((acc, cur) => [...acc, ...cur], []);
    return results;
  }
}

export const sitemapFetcher = new SitecoreSitemapFetcher();
And last, we update graphql-sitemap-service.ts to fetch the sitemap for all sites and include each site’s info in what is returned to getStaticPaths:
async exec(sites: Site[], _context?: GetStaticPathsContext): Promise<StaticPathExt[]> {
  let paths = new Array<StaticPathExt>();

  for (let i = 0; i < sites?.length; i++) {
    const site = sites[i]?.site || config.jssAppName;
    this._graphqlSitemapService.options.siteName = site;
    this._graphqlSitemapService.options.rootItemId = sites[i].rootItemId;

    if (process.env.EXPORT_MODE) {
      // Disconnected Export mode
      if (process.env.JSS_MODE !== 'disconnected') {
        const p = (await this._graphqlSitemapService.fetchExportSitemap(
          pkg.config.language
        )) as StaticPathExt[];
        paths = paths.concat(
          p.map((page) => ({
            params: { path: page.params.path, site: site },
            locale: page.locale,
          }))
        );
        // Export sitemap fetched – skip the SSG fetch for this site
        continue;
      }
    }

    const p = (await this._graphqlSitemapService.fetchSSGSitemap(
      sites[i].languages || []
    )) as StaticPathExt[];
    paths = paths.concat(
      p.map((page) => ({
        params: { path: page.params.path, site: site },
        locale: page.locale,
      }))
    );
  }

  return paths;
}
We’re all set now! Let’s create some sample sites to test it out. As mentioned, I’m not spinning up any local Sitecore instance or Docker containers, but just using the new Demo Portal, so I’ve created a demo project using the empty template (XM + Edge). This is really awesome; I didn’t have to spend any time on this part.
Sitecore Demo Portal
My instance is up and running, and it comes with SXA installed by default, nice! So I’ve just created two sites under the same tenant and added some simple components (from the JSS boilerplate example site).
Sitecore Demo Portal instance
From the portal, I can also get the Experience Edge endpoint and key:
Sitecore Demo Portal
Note: there was just one thing I had to do, and I’ll feed this back to Sitecore: by default there is no publishing target for Experience Edge, even though Edge comes by default with the template, so I had to check the database name used in XM (it was just experienceedge) and then create a new publishing target.
The first thing is to check that the Layout Service response works as expected, so I ran the GraphQL query against both the XM and Experience Edge endpoints to make sure the sites were properly resolved.
All good; I also checked that the ‘multisite_poc_uk‘ site is working fine.
Now, with everything set, we can test this locally. The first thing is to set the environment variables so they point to our Experience Edge instance.
If everything went well, you should be able to see that (check the logging we added in the getStaticProps previously).
UK Site
Global Site
Cool! Both sites are properly resolved, and the small change I’ve made to the content block text confirms it.
Let’s now run npm run next:build to test the SSG:
npm run next:build
Deploying to Vercel
We’re all set to get this deployed and tested in Vercel, exciting!
I won’t go through the details on how to deploy to Vercel as I’ve already written a post about it, so for details please visit this post!
A couple of things to take into account:
I don’t push my .env file to the GitHub repo, so I’ve set all the environment variables in Vercel itself.
I’ve created two new custom domains to test this. Doing so is really straightforward: in Vercel, go to the project settings, then Domains, and create them:
Vercel custom domains
I’ve pushed the changes to my GitHub repo that is connected to Vercel, so a deployment was triggered; check the build/deployment logs and the output!
Looking good! Let’s try out the custom domains now:
In this post I’d like to share a topic that I presented together with my friend Ehsan Aslani during the Sitecore User Group France, an event I also organized with my friends Ugo Quaisse and Ramkumar Dhinakaran in Paris at the Valtech offices. You can find more details and pictures about the event here.
About Edge Middleware
At the time we presented this topic at the UG, Edge Functions in Vercel were in beta. Now we have good news from Vercel: they released Next.js 12.2, which includes stable Middleware among other amazing new experimental features like:
On top of this new release, Vercel also introduced a distinction that causes a bit of confusion: Edge Functions != Edge Middleware. In the previous version, the middleware was deployed to Vercel as an Edge Function, while now it’s Edge Middleware.
Edge Functions (still in beta)
Vercel Edge Functions allow you to deliver content to your site’s visitors with speed and personalization. They are deployed globally by default on Vercel’s Edge Network and enable you to move server-side logic to the Edge, close to your visitor’s origin.
Edge Functions use the Vercel Edge Runtime, which is built on the same high-performance V8 JavaScript and WebAssembly engine that is used by the Chrome browser. By taking advantage of this small runtime, Edge Functions can have faster cold boots and higher scalability than Serverless Functions.
Edge Functions run after the cache, and can both cache and return responses.
Edge Functions
Edge Middleware
Edge Middleware is code that executes before a request is processed on a site. Based on the request, you can modify the response. Because it runs before the cache, using Middleware is an effective way of providing personalization to statically generated content. Depending on the incoming request, you can execute custom logic, rewrite, redirect, add headers, and more, before returning a response.
Edge Middleware allows you to deliver content to your site’s visitors with speed and personalization. They are deployed globally on Vercel’s Edge Network and enable you to move server-side logic to the Edge, close to your visitor’s origin.
Middleware uses the Vercel Edge Runtime, which is built on the same high-performance V8 JavaScript and WebAssembly engine that is used by the Chrome browser. The Edge Runtime exposes and extends a subset of Web Standard APIs such as FetchEvent, Response, and Request, to give you more control over how you manipulate and configure a response, based on the incoming requests. To learn more about writing Middleware, see the Middleware API guide.
Edge Middleware
Benefits of Edge Functions
Reduced latency: Code runs geographically close to the client. A request made in London will be processed by the nearest edge node to London, instead of Washington, USA.
Speed and agility: Edge Functions use Edge Runtime, which, due to its smaller API surface, allows for a faster startup than Node.js
Personalized content: Serve personalized cached content based on attributes such as visitor location, system language, or cookies
About nested middleware in beta
With the stable release of Middleware in Next.js v12.2, nested middleware is no longer supported; details here.
In the beta version, it was possible to create different “_middleware.ts” files under specific folders to control when each one executed. Now, only one file at the app’s root is allowed, and we need to add some logic to handle that by checking the parsed URL, like:
// <root>/middleware.ts
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  if (request.nextUrl.pathname.startsWith('/about')) {
    // This logic is only applied to /about
  }
  if (request.nextUrl.pathname.startsWith('/dashboard')) {
    // This logic is only applied to /dashboard
  }
}
In the demo I’ve prepared for the UG, I used edge functions for doing a bit of geolocation, playing with cookies, A/B testing, rewrites, and feature-flag enablement.
To start, I’ve just created an empty project using the Next.js CLI:
npx create-next-app@latest --typescript
Then, move inside the newly created app folder, run the dev server (npm run dev), and check localhost:
We are all set to start testing the middleware. To do that, we create a new file in the root folder named “middleware.ts”. Let’s add some code there to test how it works:
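The snippet isn’t reproduced in this excerpt, so here is a minimal stand-in for the first test. I’ve kept the logic framework-free for clarity; in the real middleware.ts you’d set the header on a NextResponse (the localhost fallback value is my assumption, since req.geo is empty locally):

```typescript
const HEADER_NAME = 'x-sug-country';

// Compute the header value the middleware sets from the request's geo data
function resolveCountry(geoCountry?: string): string {
  return geoCountry ?? 'unknown'; // no geolocation data on localhost
}

// In middleware.ts this becomes roughly:
//   const res = NextResponse.next();
//   res.headers.set(HEADER_NAME, resolveCountry(req.geo?.country));
//   return res;
```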
This simply adds a response header and returns the response. Refresh your browser and check the headers; our recently added “x-sug-country” header should be there:
For this demo, I’ve created some simple pages:
- pages
  |- about
     |- aboutnew
     |- index
  |- featureflag
     |- featureflags
  |- abtest
  |- index
A/B Testing
The idea was to do some A/B testing on the about page. For that I used ConfigCat, an easy-to-use tool for managing feature flags that also has options for audience targeting, so I created my “newAboutPage” flag with a 50% rollout:
The following code is what we need in our middleware. It basically gets the flag value from ConfigCat and stores it in a cookie. Note the usage of URL redirects and rewrites, cookie management, feature flags, and of course A/B testing, all running as middleware that, when deployed to Vercel, is executed on the Edge Network, close to the visitor’s origin, with close to zero latency.
export function middleware(req: NextRequest) {
  if (req.nextUrl.pathname.startsWith('/about')) {
    const url = req.nextUrl.clone()

    // Redirect paths that go directly to the variant
    if (url.pathname != '/about') {
      url.pathname = '/about'
      return NextResponse.redirect(url)
    }

    const cookie = req.cookies.get(ABOUT_COOKIE_NAME) || (getValue('newaboutpage') ? '1' : '0')
    url.pathname = cookie === '1' ? '/about/aboutnew' : '/about'

    const res = NextResponse.rewrite(url)

    // Add the cookie if it's not there
    if (!req.cookies.get(ABOUT_COOKIE_NAME)) {
      res.cookies.set(ABOUT_COOKIE_NAME, cookie)
    }
    return res
  }
...
Let’s test it and by clicking on “Remove Cookie and Reload” you’ll be getting both variants with 50% probability:
Feature flags
In the demo, I’ve also added the feature flags page, where I’m rendering or hiding some components depending on their flag enablement, again using ConfigCat:
The “sugconfr” flag, which you can see is disabled, and the “userFromFrance” flag, which also checks the country parameter and only returns true if it’s France, show how easily we can personalize based on geolocation.
Let’s have a look at the code we’ve added to the middleware:
export function middleware(req: NextRequest) {
  if (req.nextUrl.pathname.startsWith('/about')) {
    ...
  }
  if (req.nextUrl.pathname.startsWith('/featureflag')) {
    const url = req.nextUrl.clone()

    // Fetch user Id from the cookie if available
    const userId = req.cookies.get(COOKIE_NAME_UID) || crypto.randomUUID()
    const country = req.cookies.get(COOKIE_NAME_COUNTRY) || req.geo?.country
    const sugfr = req.cookies.get(COOKIE_NAME_SUGFR) || (getValue(COOKIE_NAME_SUGFR) ? '1' : '0')

    const res = NextResponse.rewrite(url)

    // Add the cookies if those are not there
    if (!req.cookies.get(COOKIE_NAME_COUNTRY)) {
      res.cookies.set(COOKIE_NAME_COUNTRY, country)
    }
    if (!req.cookies.get(COOKIE_NAME_UID)) {
      res.cookies.set(COOKIE_NAME_UID, userId)
    }
    if (!req.cookies.get(COOKIE_NAME_SUGFR)) {
      res.cookies.set(COOKIE_NAME_SUGFR, sugfr)
    }
    return res
  }
}
Again, we get the values and store them in cookies. Then we use the feature flags to show or hide some components, as we can see here:
If we load the page with the “sugconfr” disabled, we will get this:
So, let’s enable it back from ConfigCat, publish the changes and reload the page:
Now the page looks different; the SUGFR component is showing up. As you can see, the other component, which we chose to enable only for users coming from France, is still not showing. This is because we are testing from localhost, so of course there is no geolocation data coming with the request. So let’s deploy the app to Vercel so we can test this part as well and check how it runs on the Edge.
Note: make sure you add the ConfigCat API Key to the environment variables in Vercel before deploying:
If you have a look at the deployment logs, you will see that it created the edge function based on our middleware:
If we check the site now, as we are now getting geolocation data from the user’s request, the component is showing up there:
You can check logs by going to the Functions section of the Vercel dashboard, which is really cool for troubleshooting purposes:
This was just a quick example of how to start using the Middleware feature from Next.js and Vercel’s Edge Network, which enables us to move some backend code from the server to the edge, making those calls super fast with almost no latency. Now that it’s stable, we can start implementing it for our clients. There are multiple use cases; another quick example is resolving multisite by hostname for our Sitecore JSS/Next.js implementations.
You can find the example app code here in this GitHub repo. The app is deployed to Vercel and accessible here.
In this post I’ll be showing an approach for converting existing Sitecore MVC applications to the Jamstack architecture. It’s time to think about modernizing our old-fashioned Sitecore apps to benefit from modern tech stack capabilities like headless, SSG, ISR, multi-channel, etc.
Architecture
Jamstack architecture for existing Sitecore MVC sites is possible because of the ability of the Sitecore Layout Service to render MVC components to HTML, and include them in its output.
The publishing and rendering process consists of the following steps:
The Layout Service outputs MVC components as HTML, embedded in its usual service output.
The Layout Service output is published to the Content Delivery database with each page/route, allowing it to be queried by Sitecore headless SDKs such as Next.js.
The Next.js application queries the Layout Service output for the route and passes it into one or more placeholder components.
Based on the lack of a componentName property in the layout data, the Placeholder component in the Sitecore Next.js SDK renders the Sitecore component directly as HTML into the pre-rendered document.
Prerequisites
Sitecore version 10.2+ – An upgrade of your MVC application would be needed.
To make things easier for this demo, I’m using the “Basic Company – Unicorn” site from the Sitecore Helix examples, you can find the repo here.
The first step is to upgrade the solution to 10.2, you can also find my open PR with the upgrade here.
Then, we need to add the Headless Services to our CM and CD images. You can find the final code here, which also adds a Next.js rendering host app image on top.
At this point, we have our MVC application up and running on Sitecore 10.2 and Headless Services are also installed. We are now ready to start making some changes to the app so we can make it work with JSS.
Prepare the MVC site to be compatible with JSS App
First of all, we need to create an API key in order to allow clients to communicate with the Layout Service on our Sitecore instance. For that, we simply create an item under /sitecore/system/Settings/Services/API Keys.
Make sure the CORS Origins and Allowed Controllers fields are set to * for this demo.
To enable editing and static generation support in the JSS app, we have to make the site root inherit from the /sitecore/templates/Foundation/JavaScript Services/App template:
To enable editing support, we also need the layout to inherit from the /sitecore/templates/Foundation/JavaScript Services/JSS Layout template.
Now, we need to configure the Layout Service Placeholders field. This field determines which placeholder information to include in the Layout Service response data.
Inspect the Layout Service response
We can now have a look at and analyze the JSON we get from the Layout Service by visiting the endpoint:
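For reference, a typical Layout Service URL looks like the one below. The hostname and API key come from the jss setup step later in this post; the sc_site value is an assumption for this demo site:

```
https://www.basic-company-unicorn.localhost/sitecore/api/layout/render/jss?item=/&sc_apikey={B10DB745-2B8A-410E-BDEC-07791190B599}&sc_site=basic-company
```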
We can see in the response that we are getting the placeholders we configured previously (main, header and footer).
Configure the Sitecore Layout Service to output HTML for MVC renderings
Let’s now go and configure the “Hero Banner” component to render HTML instead of JSON:
Done, let’s publish this change and see what we get in the Layout Service response for this component:
So here we go: we can now find the HTML in the contents. Let’s enable the HTML output on all the other MVC renderings and publish those changes; in the meantime, let’s create our JSS app.
Create the Next.js JSS app
Let’s open a terminal and navigate to the src folder (..\examples\helix-basic-unicorn\src\Project\BasicCompany). We now run the JSS CLI command to create a new app; here we can choose whether to fetch data with REST or GraphQL, and whether to prerender with SSG or SSR:
The JSS app is now created. Let’s set it up and connect it to our Sitecore instance. Run the following CLI command:
cd basic-company
jss setup
Provide the following values:
1. Is your Sitecore instance on this machine or accessible via network share? [y/n]: y
2. Path to the Sitecore folder (e.g. c:\inetpub\wwwroot\my.siteco.re): ..\examples\helix-basic-unicorn\docker\deploy\website
3. Sitecore hostname (e.g. http://myapp.local.siteco.re; see /sitecore/config; ensure added to hosts): https://www.basic-company-unicorn.localhost/
4. Sitecore import service URL [https://www.basic-company-unicorn.localhost/sitecore/api/jss/import]:
5. Sitecore API Key (ID of API key item): {B10DB745-2B8A-410E-BDEC-07791190B599}
6. Please enter your deployment secret (32+ random chars; or press enter to generate one):
Now we can deploy the config (check the files created under sitecore/config). For this we run the following CLI command:
jss deploy config
Prepare the Next.js app to render our content
Let’s update the Layout.tsx to add our placeholders (header, main, footer):
Also copy the “basic-company.css” file from the website folder into the “src/assets” folder and update _app.tsx with this:
All good, time to connect and test it! Run the following CLI command:
jss start:connected
http://localhost:3000
Yay! Visit http://localhost:3000 and you’ll see the Basic Company MVC site rendered as a JSS app. This is ready to be deployed and statically generated, but let’s move one step further and start converting one of the components to React, as I see this approach as a way to incrementally start your migration to JSS (React).
Experience Editor compatibility
Let’s double-check that our Experience Editor is still working as expected:
Start converting components from MVC (C#/Razor) to Next.js (JavaScript/React) incrementally
Let’s duplicate the “Hero Banner” rendering in Sitecore, change the template to make it a “Json Rendering”, rename it to “HeroBanner” to comply with React naming conventions, and disable the “Render as HTML” checkbox. Also make sure the “Component Name” field is set to “HeroBanner”. Then add this new component to the homepage next to the MVC one.
Duplicated HeroBanner component
Publish the rendering and check the Layout Service response again; now you should be able to see the two versions of the component, one as HTML and one as JSON:
Good! We got the expected results in the Layout Service response. If we now refresh our JSS app, we will see that the component is added but still lacks its React implementation:
Create the React component through the component scaffolding
To create the React implementation of the component, just run the following in the terminal (always from the JSS app root):
jss scaffold BasicContent/HeroBanner
Have a look at the files created and make some changes to the React implementation (BasicContent/HeroBanner.tsx).
Now both the MVC and React components are living on the same site. I kept both to make it more visual, but the proper way of migrating would be to just replace the MVC rendering.
I hope you find this interesting. You can find the complete solution here; it’s a fork of the Sitecore Helix Examples with the Headless Services, the Sitecore 10.2 upgrade, a Next.js rendering host, and the app added on top.
Next.js allows you to create or update static pages after you’ve built your site. Incremental Static Regeneration (ISR) enables developers and content editors to use static-generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages.
Static pages can be generated at runtime (on-demand) instead of at build-time with ISR. Using analytics, A/B testing, or other metrics, you are equipped with the flexibility to make your own tradeoff on build times.
Consider an e-commerce store with 100,000 products. At a realistic 50ms to statically generate each product page, the build would take almost 2 hours without ISR. With ISR, we can choose from:
Faster Builds → Generate the most popular 1,000 products at build-time. Requests made to other products will be a cache miss and statically generate on-demand: 1-minute builds.
Higher Cache Hit Rate → Generate 10,000 products at build-time, ensuring more products are cached ahead of a user’s request: 8-minute builds.
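The tradeoffs above are easy to sanity-check with a back-of-the-envelope calculation, assuming the same 50ms per page and fully serial generation (real builds parallelize and add overhead, so treat these as rough orders of magnitude):

```typescript
// Rough build-time estimate for statically generating N pages serially,
// at an assumed 50ms of generation work per page.
const msPerPage = 50;
const buildMinutes = (pages: number): number => (pages * msPerPage) / 1000 / 60;

console.log(buildMinutes(1_000));   // most popular products only: under a minute
console.log(buildMinutes(10_000));  // higher cache hit rate: around 8 minutes
console.log(buildMinutes(100_000)); // the full catalog, generated without ISR
```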
Exploring ISR
In my previous post, I’ve created a JSS-Next.js app that we deployed to Vercel. I also created a WebHook to trigger a full rebuild in Vercel (SSG). Now, I’ll explain how the ISR works in this same app.
Fetching Data and Generating Paths
Data:
ISR uses the same Next.js API to generate static pages: getStaticProps. By specifying revalidate: 5, we inform Next.js to use ISR to update this page after it’s generated.
Check the src/pages/[[...path]].tsx file and the getStaticProps function:
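If you don't have the sample handy, here is a minimal sketch of the ISR-relevant shape of that function. The names below are placeholders: the real JSS file delegates to a page-props factory rather than building layout data inline.

```typescript
// Hedged sketch of a catch-all page's getStaticProps with ISR enabled.
type StaticPropsResult = {
  props: { layoutData: unknown };
  revalidate: number; // seconds before Next.js may regenerate the page in the background
};

async function getStaticProps(context: { params: { path?: string[] } }): Promise<StaticPropsResult> {
  // Normalize the catch-all segments into a Sitecore-style item path
  const itemPath = '/' + (context.params.path ?? []).join('/');
  const layoutData = { itemPath }; // placeholder for the Layout Service response
  return {
    props: { layoutData },
    revalidate: 5, // enable ISR: regenerate at most once every 5 seconds
  };
}
```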
Paths:
Next.js defines which pages to generate at build-time based on the paths returned by getStaticPaths. For example, you can generate the most popular 1,000 products at build-time by returning the paths for the top 1,000 product IDs in getStaticPaths.
With this configuration, I'm telling Next.js to enable ISR and to revalidate every 5 seconds. After this period, the first user requesting the page will receive the old static version and trigger the revalidation behind the scenes.
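As a sketch of the "most popular 1,000 products" idea (the product IDs here are made up, and a real implementation would pull them from analytics or the CMS), getStaticPaths for the catch-all route could look like:

```typescript
// Hedged sketch of getStaticPaths: pre-render only the most popular paths at
// build time; everything else is generated on demand on first request.
const topProductIds = ['1001', '1002', '1003']; // placeholder for real analytics data

async function getStaticPaths() {
  return {
    paths: topProductIds.map((id) => ({ params: { path: ['products', id] } })),
    fallback: 'blocking' as const, // unknown paths are statically generated on demand
  };
}
```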
The Flow
Next.js can define a revalidation time per-page (e.g. 5 seconds).
The initial request to the page will show the cached page.
The data for the page is updated in the CMS.
Any requests to the page after the initial request and before the 5-second window will show the cached (hit) page.
After the 5-second window, the next request will still show the cached (stale) page. Next.js triggers a regeneration of the page in the background.
Once the page has been successfully generated, Next.js will invalidate the cache and show the updated product page. If the background regeneration fails, the old page remains unaltered.
Page Routing
Here’s a high-level overview of the routing process:
In the diagram above, you can see how the Next.js route is applied to Sitecore JSS.
The [[...path]].tsx Next.js route will catch any path and pass this information along to getStaticProps or getServerSideProps on the context object. The Page Props Factory uses the path information to construct a normalized Sitecore item path. It then makes a request to the Sitecore Layout Service REST API or Sitecore GraphQL Edge schema to fetch layout data for the item.
Demo!
So, back to our previously deployed app in Vercel: log in to the Sitecore Content Editor and make a change to a field. I'm updating the heading field (/sitecore/content/sitecoreverceldemo/home/Page Components/home-jss-main-ContentBlock-1) by adding "ISR Rocks!". Save the item and refresh the page deployed on Vercel. (Don't publish! That would trigger the webhook defined in the publish:end event.)
After refreshing the page, I can still see the old version:
But if I keep watching what's going on in ngrok, I can see the requests made to the Layout Service:
After refreshing the page again, I can see the changes there!
So, it got updated without rebuilding and regenerating the whole site.
That's it! I hope this post helps you understand how ISR works and how to get started with it in your Sitecore JSS implementation.
Thanks for reading and stay tuned for more Sitecore stuff!
In my previous posts about image cropping, I used Azure Cognitive Services (Vision) to manage media cropping in a smart way. Now I'm sharing another use of Azure Cognitive Services (Language): a PowerShell tool that makes it possible to translate your Sitecore content quickly and easily.
Handling item versioning and translation from the Sitecore Content Editor is tedious work for editors, especially when it comes to manually creating localized content for your site.
The idea of the PSE tool is to make the editor's life easier, so that in a few clicks they can create the language versions of an item (including subitems and datasources) and populate those versions with translated content!
Azure Translator – An AI service for real-time text translation
Translator is a cloud-based machine translation service you can use to translate text in near real-time through a simple REST API call. The service uses modern neural machine translation technology and offers statistical machine translation technology. Custom Translator is an extension of Translator, which allows you to build neural translation systems. The customized translation system can be used to translate text with Translator or Microsoft Speech Services. For more info please refer to the official documentation.
About the tool
As I mentioned before, this tool is based on SPE, so it's easy to integrate into your Sitecore instance. I'll share the full implementation details, along with the code and packages. The service API layer has been implemented in .NET.
The context menu script
Demo
Creating the Azure service
Before proceeding with the implementation, let's see how to create the Translator service in Azure. The steps are very straightforward, as usual when creating such resources.
That's it! You have your Translator service created. Now just take a look at the Keys and Endpoint section; you will need it for updating your config file:
Keys and Endpoint
Service implementation (C#)
TranslatorService.cs
This is the service that communicates with the Azure API. It's quite basic and straightforward; you can also find examples and documentation on the official sites.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Sitecore.Cognitive.Translator.PSE.Caching;
using Sitecore.Cognitive.Translator.PSE.Models;
using Sitecore.Configuration;
namespace Sitecore.Cognitive.Translator.PSE.Services
{
public class TranslatorService : ITranslatorService
{
private readonly string _cognitiveServicesKey = Settings.GetSetting("Sitecore.Cognitive.Translator.PSE.TranslateService.ApiKey", "");
private readonly string _cognitiveServicesUrl = Settings.GetSetting("Sitecore.Cognitive.Translator.PSE.TranslateService.ApiUrl", "");
private readonly string _cognitiveServicesZone = Settings.GetSetting("Sitecore.Cognitive.Translator.PSE.TranslateService.ApiZone", "");
public async Task<TranslationResult[]> GetTranslatation(string textToTranslate, string fromLang, string targetLanguage, string textType)
{
return await CacheManager.GetCachedObject(textToTranslate + fromLang + targetLanguage + textType, async () =>
{
var route = $"/translate?api-version=3.0&to={targetLanguage}&suggestedFrom=en";
if (!string.IsNullOrEmpty(fromLang))
{
route += $"&from={fromLang}";
}
if (!string.IsNullOrEmpty(textType) && textType.Equals("Rich Text"))
{
route += "&textType=html";
}
var requestUri = _cognitiveServicesUrl + route;
var translationResult = await TranslateText(requestUri, textToTranslate);
return translationResult;
});
}
async Task<TranslationResult[]> TranslateText(string requestUri, string inputText)
{
var body = new object[] { new { Text = inputText } };
var requestBody = JsonConvert.SerializeObject(body);
using (var client = new HttpClient())
using (var request = new HttpRequestMessage())
{
request.Method = HttpMethod.Post;
request.RequestUri = new Uri(requestUri);
request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
request.Headers.Add("Ocp-Apim-Subscription-Key", _cognitiveServicesKey);
request.Headers.Add("Ocp-Apim-Subscription-Region", _cognitiveServicesZone);
var response = await client.SendAsync(request).ConfigureAwait(false);
var result = await response.Content.ReadAsStringAsync();
var deserializedOutput = JsonConvert.DeserializeObject<TranslationResult[]>(result);
return deserializedOutput;
}
}
}
}
The code is simple; I'm just adding a caching layer on top to avoid repeated calls to the API.
You can check the full parameter list in the official documentation, but let me explain the ones I used:
api-version (required): Version of the API requested by the client. Value must be 3.0.
to (required): Specifies the language of the output text. The target language must be one of the supported languages included in the translation scope.
from (optional): Specifies the language of the input text. Find which languages are available to translate from by looking up supported languages using the translation scope. If the from parameter is not specified, automatic language detection is applied to determine the source language.
textType (optional): Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: plain (default) or html. In this case, I'm passing html when translating from a Rich Text field.
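To make the parameter handling concrete, here is the route construction from the C# service above, sketched in TypeScript (the parameter names come from the Translator v3 REST API; only the query string is built, no HTTP call is made):

```typescript
// Mirrors the query-string logic of the C# GetTranslatation method:
// api-version and `to` are required, `from` and `textType` are optional.
function buildRoute(targetLang: string, fromLang?: string, textType?: string): string {
  let route = `/translate?api-version=3.0&to=${targetLang}&suggestedFrom=en`;
  if (fromLang) {
    route += `&from=${fromLang}`; // omit to let the service auto-detect the source
  }
  if (textType === 'Rich Text') {
    route += '&textType=html'; // preserve markup when translating Rich Text fields
  }
  return route;
}
```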
We also need to create the models the data is parsed into (TranslationResult). I'm not adding that code here to keep things simple, but you can check the source code for full details.
TranslationExtensions.cs
using System.Linq;
using System.Threading.Tasks;
using Sitecore.Cognitive.Translator.PSE.Services;
using Microsoft.Extensions.DependencyInjection;
using Sitecore.DependencyInjection;
namespace Sitecore.Cognitive.Translator.PSE.Extensions
{
public class TranslationExtensions
{
private readonly ITranslatorService _translatorService;
public TranslationExtensions(ITranslatorService translatorServices)
{
_translatorService = translatorServices;
}
public TranslationExtensions()
{
_translatorService = ServiceLocator.ServiceProvider.GetService<ITranslatorService>();
}
public async Task<string> TranslateText(string input, string fromLang, string destLang, string textType)
{
var res = await _translatorService.GetTranslatation(input, fromLang, destLang, textType);
if (res != null && res.Any() && res[0].Translations.Any())
{
return res[0].Translations[0].Text;
}
return string.Empty;
}
}
}
We basically need one main script added to the context menu (Add Language Version and Translate) and then a few functions, written this way to keep the script readable and modular.
Add Language Version and Translate
Import-Function GetLanguages
Import-Function GetItems
Import-Function ConfirmationMessage
Import-Function Translate
Import-Function GetUserOptions
Import-Function GetUserFieldsToTranslate
# Global variables
$location = get-location
$currentLanguage = [Sitecore.Context]::Language.Name
$langOptions = @{}
$destinationLanguages = @{}
$options = @{}
# Variables from user input - Custom Object
$userOptions = [PSCustomObject]@{
'FromLanguage' = $currentLanguage
'ToLanguages' = @()
'IncludeSubitems' = $false
'IncludeDatasources' = $false
'IfExists' = "Skip"
'FieldsToTranslate' = @()
}
# Get language options
GetLanguages $langOptions $destinationLanguages
# Ask user for options
$result = GetUserOptions $currentLanguage $langOptions $destinationLanguages $userOptions
if($result -ne "ok") {
Write-Host "Canceling"
Exit
}
# Get all items
$items = @()
$items = GetItems $location $userOptions.IncludeSubitems $userOptions.IncludeDatasources
# Ask user for fields to translate
$dialogResult = GetUserFieldsToTranslate $items $options $userOptions
if($dialogResult -ne "OK") {
Write-Host "Canceling"
Exit
}
# Ask user for confirmation
$proceed = ConfirmationMessage $items.Count $options $userOptions
if ($proceed -ne 'yes') {
Write-Host "Canceling"
Exit
}
# Call the translator service
Translate $items $userOptions
GetLanguages
function GetLanguages {
[CmdletBinding()]
param($langOptions, $destinationOptions)
$user = Get-User -Current
$languages = Get-ChildItem "master:\sitecore\system\Languages"
$currentLanguage = [Sitecore.Context]::Language.Name
# Get the list of languages with writing rights and remove the origin language
foreach ($lang in $languages) {
$langOptions[$lang.Name] = $lang.Name
if (Test-ItemAcl -Identity $user -Path $lang.Paths.Path -AccessRight language:write) {
$destinationOptions[$lang.Name] = $lang.Name
}
}
$destinationOptions.Remove($currentLanguage)
}
GetUserOptions
function GetUserOptions {
[CmdletBinding()]
param($currentLanguage, $langOptions, $destinationLanguages, [PSCustomObject]$userOptions)
# Version overwriting options
$ifExistsOpts = @{};
$ifExistsOpts["Append"] = "Append";
$ifExistsOpts["Skip"] = "Skip";
$ifExistsOpts["Overwrite"] = "OverwriteLatest";
$result = Read-Variable -Parameters `
@{ Name = "fLang"; Value=$currentLanguage; Title="From Language"; Options=$langOptions; },
@{ Name = "tLang"; Title="Destination Languages"; Options=$destinationLanguages; Editor="checklist"; },
@{ Name = "iSubitems"; Value=$false; Title="Include Subitems"; Columns = 4;},
@{ Name = "iDatasources"; Value=$false; Title="Include Datasources"; Columns = 4 },
@{ Name = "iExist"; Value="Skip"; Title="If Language Version Exists"; Options=$ifExistsOpts; Tooltip="Append: Create new language version and translate content.<br>" `
+ "Skip: skip it if the target has a language version.<br>Overwrite Latest: overwrite latest language version with translated content."; } `
-Description "Select the from and target languages with options on how to perform the translation" `
-Title "Add Language and Translate" -Width 650 -Height 660 -OkButtonName "Proceed" -CancelButtonName "Cancel" -ShowHints
$userOptions.FromLanguage = $fLang
$userOptions.ToLanguages += $tLang
$userOptions.IncludeSubitems = $iSubitems
$userOptions.IncludeDatasources = $iDatasources
$userOptions.IfExists = $iExist
return $result
}
GetItems
function GetItems {
[CmdletBinding()]
param($location, $includeSubitems, $includeDatasources)
Import-Function GetItemDatasources
$items = @()
$items += Get-Item $location
# add subitems
if ($includeSubitems) {
$items += Get-ChildItem $location -Recurse
}
# add datasources
if ($includeDatasources) {
Foreach($item in $items) {
$items += GetItemDatasources($item)
}
}
# Remove any duplicates, based on ID
$items = $items | Sort-Object -Property 'ID' -Unique
return $items
}
GetFields
function GetFields {
[CmdletBinding()]
param($items, $options)
Import-Function GetTemplatesFields
Foreach($item in $items) {
$fields += GetTemplatesFields($item)
}
# Remove any duplicates, based on ID
$fields = $fields | Sort-Object -Property 'Name' -Unique
# build the hashtable to show as checklist options
ForEach ($field in $fields) {
$options.add($field.Name, $field.ID.ToString())
}
return $fields
}
function Translate {
[CmdletBinding()]
param($items, [PSCustomObject]$userOptions)
Write-Host "Proceeding with execution..."
# Call the translator service
$translatorService = New-Object Sitecore.Cognitive.Translator.PSE.Extensions.TranslationExtensions
$items | ForEach-Object {
$currentItem = $_
foreach($lang in $userOptions.ToLanguages) {
Add-ItemLanguage $_ -Language $userOptions.FromLanguage -TargetLanguage $lang -IfExist $userOptions.IfExists
Write-Host "Item : '$($currentItem.Name)' created in language '$lang'"
Get-ItemField -Item $_ -Language $lang -ReturnType Field -Name "*" | ForEach-Object{
# Only look within Single-Line Text, Multiline Text and Rich Text fields that have been chosen in the dialog box
if(($_.Type -eq "Single-Line Text" -or $_.Type -eq "Rich Text" -or $_.Type -eq "Multiline Text") -and $userOptions.FieldsToTranslate.Contains($_.ID.ToString())) {
if (-not ([string]::IsNullOrEmpty($_))) {
# Get the item in the target created language
$langItem = Get-Item -Path "master:" -ID $currentItem.ID -Language $lang
# Get the translated content from the service
$translated = $translatorService.TranslateText($currentItem[$_.Name], $userOptions.FromLanguage, $lang, $_.Type)
# edit the item with the translated content
$langItem.Editing.BeginEdit()
$langItem[$_.Name] = $translated.Result
$langItem.Editing.EndEdit()
Write-Host "Field : '$_' translated from '$($userOptions.FromLanguage)'" $currentItem[$_.Name] " to : '$lang'" $translated.Result
}
}
}
}
}
}
In the Translate function, I'm making the call to the API (Sitecore.Cognitive.Translator.PSE.Extensions.TranslationExtensions).
That's pretty much it; now it's time to test it! If everything went well, you will be able to add language versions to your items, complete with content translated by Azure Cognitive Translation.
Let’s see this in action!
For the purpose of this demo, I've created a simple content tree with 3 levels. The items have some content in English (plain text and HTML), and I'll be using the tool to create the Spanish (Argentina) and French (France) versions with translated content.
1- Click on the Home item and choose the Add Language Version and Translate option from the scripts section.
2- Choose the options. In this case, I want to translate from the default 'en' language to both 'es-AR' and 'fr-FR'. I also want to include the subitems, but since for this test the items don't have a presentation or datasources, I'm keeping that option disabled. No versions exist in the target languages for those items, so I'm keeping the "Skip" option.
3- Click on proceed and choose the fields you want to translate:
I'm selecting all fields. As you can check in the SPE code, I'm removing the standard fields from the items to be translated; normally you don't want those, and they would overpopulate the fields list.
4- Click OK, double-check the data entered, and click the OK button to make the magic happen:
5- Click on the View script results link to check the output logs:
6- Check that the items have been created in the desired languages and the contents are already translated. Review them, publish and have a cup of coffee :).
fr-FR items version:
es-AR items version:
Voilà! After a few clicks, you have your content items created in the target language versions with the content translated. I hope you like it as much as I do.
Find the source code in GitHub, download the Sitecore package here or get the asset image from Docker Hub.
In my previous post, I explained how to configure the Blob Storage module on a Sitecore 9.3+ instance. The following post assumes you are already familiar with it and that your Sitecore instance is using the Azure Blob Storage provider.
In this post, I'll show you how we can use Azure Functions (with a blob trigger) to optimize (compress) images on the fly as they are uploaded to the Media Library, gaining performance with a serverless approach.
Media Compression Flow
About Azure Functions and Blob Trigger
Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third party service as well as on-premises systems. Azure Functions allows developers to take action by connecting to data sources or messaging solutions thus making it easy to process and react to events. Developers can leverage Azure Functions to build HTTP-based API endpoints accessible by a wide range of applications, mobile and IoT devices. Azure Functions is scale-based and on-demand, so you pay only for the resources you consume. For more info please refer to the official MS documentation.
Azure Functions
Azure Functions integrates with Azure Storage via triggers and bindings. Integrating with Blob storage allows you to build functions that react to changes in blob data as well as read and write values.
Creating the Azure Function
For building the blob storage trigger function I'll be using Visual Studio Code, so first of all make sure you have the Azure Functions extension for VS Code. You can get it from the marketplace or from the Extensions menu, or via the link: vscode:extension/ms-azuretools.vscode-azurefunctions.
Azure Functions Plugin
Before proceeding, make sure you are logged into your Azure subscription (az login).
1. Create an Azure Functions project: click on the add function icon, then select the blob trigger option and give the function a name.
2. Choose the Blob Storage account you are using in your Sitecore instance (myblobtestazure_STORAGE in my case).
3. Choose your blob container path (blobcontainer/{same}).
4. The basics are now created and we can start working on our implementation.
Default function class
Generated project files
The project template creates a project in your chosen language and installs required dependencies. For any language, the new project has these files:
host.json: Lets you configure the Functions host. These settings apply when you’re running functions locally and when you’re running them in Azure. For more information, see host.json reference.
local.settings.json: Maintains settings used when you’re running functions locally. These settings are used only when you’re running functions locally. For more information, see Local settings file.
Edit the local.settings.json file to add the connection string of your blob storage:
local.settings.json
The function implementation
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using ImageMagick;
using Microsoft.WindowsAzure.Storage.Blob;
namespace SitecoreImageCompressor
{
public static class CompressBlob
{
[FunctionName("CompressBlob")]
public static async Task Run([BlobTrigger("blobcontainer/{name}", Connection = "myblobtestazure_STORAGE")] CloudBlockBlob inputBlob, ILogger log)
{
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{inputBlob.Name} \n Size: {inputBlob.Properties.Length} Bytes");
if (inputBlob.Metadata.ContainsKey("Status") && inputBlob.Metadata["Status"] == "Processed")
{
log.LogInformation($"blob: {inputBlob.Name} has already been processed");
}
else
{
using (var memoryStream = new MemoryStream())
{
await inputBlob.DownloadToStreamAsync(memoryStream);
memoryStream.Position = 0;
var before = memoryStream.Length;
var optimizer = new ImageOptimizer { OptimalCompression = true, IgnoreUnsupportedFormats = true };
if (optimizer.IsSupported(memoryStream))
{
var compressionResult = optimizer.Compress(memoryStream);
if (compressionResult)
{
var after = memoryStream.Length;
var gain = 100 - (float)(after * 100) / before;
log.LogInformation($"Optimized {inputBlob.Name} - from: {before} to: {after} Bytes. Optimized {gain}%");
await inputBlob.UploadFromStreamAsync(memoryStream);
}
else
{
log.LogInformation($"Image {inputBlob.Name} - compression failed...");
}
}
else
{
var info = MagickNET.GetFormatInformation(new MagickImageInfo(memoryStream).Format);
log.LogInformation($"Image {inputBlob.Name} - the format is not supported. Compression skipped - {info.Format}");
}
}
inputBlob.Metadata.Add("Status", "Processed");
await inputBlob.SetMetadataAsync();
}
}
}
}
As you can see, I'm creating an async task that will be triggered as soon as a new blob is added to the blob storage. Since we're compressing and then re-uploading the modified image, we have to make sure the function is not triggered multiple times. To avoid that, I'm also updating the image metadata with "Status = Processed".
The next step is to get the image from the CloudBlockBlob and compress it using the Magick.NET library. Please note that this library also provides a LosslessCompress method; for this implementation I chose to go with the full compression. Feel free to update it and compare the results.
Nuget references
So, in order to make it work, we need to install the required dependencies. Please run the following commands to install the NuGet packages:
Now we have everything in place. Let's press F5 and see if the function compiles.
Terminal output
We are now ready to deploy to Azure and test the blob trigger! Click on the up arrow in order to deploy to Azure, choose your subscription and go!
Azure publish
Check the progress in the terminal and output window:
Testing the trigger
Now we can go to the Azure portal, go to the Azure function and double check that everything is there as expected:
Azure function from the portal
Go to "Monitor" and click on "Logs" so we can watch the live stream when uploading an image to the blob storage. Now, in your Sitecore instance, go to the Media Library and upload an image; this will upload the blob to Azure Storage, and the trigger will fire and compress the image.
Media Library Upload
Azure functions logs
As we can see in the logs the image got compressed, gaining almost 15%:
In this post, I'm explaining how to switch the blob storage provider to Azure Blob Storage. Before Sitecore 9.3, we could store blobs in the database or on the filesystem; Azure Blob Storage was not supported out of the box, and even though it was possible, it required some customization to make it work. Since Sitecore 9.3, a module has been released that makes it very straightforward to set up, as you will see in this post.
By doing this, we can significantly reduce costs and improve performance, as the database size won't grow as much due to media library items.
Introduction to Azure Blob storage
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Blob storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Users or client applications can access objects in Blob storage via HTTP/HTTPS, from anywhere in the world. Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library.
For more info, please refer here; you can also find some good documentation here.
Creating your blob storage resource
Azure Storage Account
Create the resource by following the wizard, then check the "Access Keys" section; you'll need the "Connection string" later.
Connection String and keys
Configuring your Sitecore instance
There are basically three main options for installing the Blob Storage module on your instance:
Install the Azure Blob Storage module in Sitecore PaaS.
Use the Sitecore Azure Toolkit:
Use a new Sitecore installation with Sitecore Azure Toolkit
Use an existing Sitecore installation with Sitecore Azure Toolkit
Use Sitecore in the Azure Marketplace (for new Sitecore installations only)
Install the Azure Blob Storage module on an on-premise Sitecore instance.
Manually install the Azure Blob Storage module in PaaS or on-premise.
This time I'll focus on the last option, manually installing the module; it doesn't matter whether it's a PaaS or on-premise approach.
7. In the \App_Config\Modules\Sitecore.AzureBlobStorage\Sitecore.AzureBlobStorage.config file, ensure that <param name="blobcontainer"> is set to the name you gave the container when creating the resource.
Let’s test it!
If everything went well, we can test it by simply uploading a media item to the Sitecore Media Library.
Now let's have a look at the Storage Explorer in the Azure portal.
Here we go: the image is now uploaded to Azure Blob Storage, meaning the config is fine and working as expected.
In my previous post, I shared the custom image field implementation that uses the Azure Computer Vision service to crop and generate thumbnails using AI. Before proceeding with this reading, please make sure you've already gone through the previous posts: Part I and Part II.
Now I'll be sharing the last, but not least, part of this topic: how to make it work on the front-end side, the media request flow, and so on.
Image request flow
The image request flow
So, the request flow is described in the following diagram. It basically follows the normal Sitecore flow, but with the introduction of Azure Computer Vision and ImageSharp to generate the proper cropped version of the image.
AICroppingProcessor
This custom processor overrides the Sitecore OOTB ThumbnailProcessor. It's basically a copy of the original code, with a customization to check the "SmartCropping" parameter on the image request.
using Sitecore.Diagnostics;
using Sitecore.Resources.Media;
using System;
using Microsoft.Extensions.DependencyInjection;
using System.IO;
using System.Linq;
using Sitecore.Computer.Vision.CroppingImageField.Services;
using Sitecore.DependencyInjection;
namespace Sitecore.Computer.Vision.CroppingImageField.Processors
{
public class AICroppingProcessor
{
private static readonly string[] AllowedExtensions = { "bmp", "jpeg", "jpg", "png", "gif" };
private readonly ICroppingService _croppingService;
public AICroppingProcessor(ICroppingService croppingService)
{
_croppingService = croppingService;
}
public AICroppingProcessor()
{
_croppingService = ServiceLocator.ServiceProvider.GetService<ICroppingService>();
}
public void Process(GetMediaStreamPipelineArgs args)
{
Assert.ArgumentNotNull(args, "args");
var outputStream = args.OutputStream;
if (outputStream == null)
{
return;
}
if (!AllowedExtensions.Any(i => i.Equals(args.MediaData.Extension, StringComparison.InvariantCultureIgnoreCase)))
{
return;
}
var smartCrop = args.Options.CustomOptions[Constants.QueryStringKeys.SmartCropping];
if (!string.IsNullOrEmpty(smartCrop) && bool.Parse(smartCrop))
{
Stream outputStrm;
outputStrm = Stream.Synchronized(_croppingService.GetCroppedImage(args.Options.Width, args.Options.Height, outputStream.MediaItem));
args.OutputStream = new MediaStream(outputStrm, args.MediaData.Extension, outputStream.MediaItem);
}
else if (args.Options.Thumbnail)
{
var transformationOptions = args.Options.GetTransformationOptions();
var thumbnailStream = args.MediaData.GetThumbnailStream(transformationOptions);
if (thumbnailStream != null)
{
args.OutputStream = thumbnailStream;
}
}
}
}
}
We also need to customize the MediaRequest to take the "SmartCropping" parameter into account:
using Sitecore.Configuration;
using Sitecore.Diagnostics;
using Sitecore.Resources.Media;
using System.Web;
namespace Sitecore.Computer.Vision.CroppingImageField.Requests
{
using System.Collections.Specialized;
public class AICroppingMediaRequest : MediaRequest
{
private HttpRequest _innerRequest;
private MediaUrlOptions _mediaQueryString;
private MediaUri _mediaUri;
private MediaOptions _options;
protected override MediaOptions GetOptions()
{
var queryString = this.InnerRequest.QueryString;
if (queryString == null || queryString.Count == 0)
{
_options = new MediaOptions();
}
else
{
SetMediaOptionsFromMediaQueryString(queryString);
if (!string.IsNullOrEmpty(queryString.Get(Constants.QueryStringKeys.SmartCropping)))
{
SetCustomOptionsFromQueryString(queryString);
}
}
if (!this.IsRawUrlSafe)
{
if (Settings.Media.RequestProtection.LoggingEnabled)
{
string urlReferrer = this.GetUrlReferrer();
Log.SingleError(string.Format("MediaRequestProtection: An invalid/missing hash value was encountered. " +
"The expected hash value: {0}. Media URL: {1}, Referring URL: {2}",
HashingUtils.GetAssetUrlHash(this.InnerRequest.RawUrl), this.InnerRequest.RawUrl,
string.IsNullOrEmpty(urlReferrer) ? "(empty)" : urlReferrer), this);
}
_options = new MediaOptions();
}
return _options;
}
private void SetCustomOptionsFromQueryString(NameValueCollection queryString)
{
this.ProcessCustomParameters(_options);
if (!string.IsNullOrEmpty(queryString.Get(Constants.QueryStringKeys.SmartCropping))
&& !_options.CustomOptions.ContainsKey(Constants.QueryStringKeys.SmartCropping))
{
_options.CustomOptions.Add(Constants.QueryStringKeys.SmartCropping, queryString.Get(Constants.QueryStringKeys.SmartCropping));
}
}
private void SetMediaOptionsFromMediaQueryString(NameValueCollection queryString)
{
MediaUrlOptions mediaQueryString = this.GetMediaQueryString();
_options = new MediaOptions()
{
AllowStretch = mediaQueryString.AllowStretch,
BackgroundColor = mediaQueryString.BackgroundColor,
IgnoreAspectRatio = mediaQueryString.IgnoreAspectRatio,
Scale = mediaQueryString.Scale,
Width = mediaQueryString.Width,
Height = mediaQueryString.Height,
MaxWidth = mediaQueryString.MaxWidth,
MaxHeight = mediaQueryString.MaxHeight,
Thumbnail = mediaQueryString.Thumbnail,
UseDefaultIcon = mediaQueryString.UseDefaultIcon
};
if (mediaQueryString.DisableMediaCache)
{
_options.UseMediaCache = false;
}
foreach (string allKey in queryString.AllKeys)
{
if (allKey != null && queryString[allKey] != null)
{
_options.CustomOptions[allKey] = queryString[allKey];
}
}
}
public override MediaRequest Clone()
{
Assert.IsTrue((base.GetType() == typeof(AICroppingMediaRequest)), "The Clone() method must be overridden to support prototyping.");
return new AICroppingMediaRequest
{
_innerRequest = this._innerRequest,
_mediaUri = this._mediaUri,
_options = this._options,
_mediaQueryString = this._mediaQueryString
};
}
}
This code is very straightforward: it checks whether the “SmartCropping=true” parameter exists in the media request and, if so, executes the custom code to crop the image.
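For completeness, the custom MediaRequest class still needs to be registered so Sitecore uses it instead of the default one. A typical patch overrides the mediaRequest prototype (the assembly name below is an assumption based on the namespaces used in this post):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <mediaLibrary>
      <mediaPrototypes>
        <mediaRequest>
          <patch:attribute name="type">Sitecore.Computer.Vision.CroppingImageField.AICroppingMediaRequest, Sitecore.Computer.Vision.CroppingImageField</patch:attribute>
        </mediaRequest>
      </mediaPrototypes>
    </mediaLibrary>
  </sitecore>
</configuration>
```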
The “Get Thumbnails” method limitations
As we can see in the official documentation, there are some limitations on the thumbnail generator method:
Image file size must be less than 4MB.
Image dimensions should be greater than 50 x 50.
Width of the thumbnail must be between 1 and 1024.
Height of the thumbnail must be between 1 and 1024.
The most important one is that the width and height cannot exceed 1024px, which is problematic as we sometimes need to crop at larger sizes.
So, to make it more flexible, I’m doing the cropping with the System.Drawing Graphics library, while still getting the focus point coordinates from the “Get Area Of Interest” API method:
using Sitecore.Data.Items;
using Microsoft.Extensions.DependencyInjection;
using System.IO;
using Sitecore.DependencyInjection;
using Sitecore.Resources.Media;
using System.Drawing;
using System.Drawing.Imaging;
using System.Drawing.Drawing2D;
namespace Sitecore.Computer.Vision.CroppingImageField.Services
{
public class CroppingService : ICroppingService
{
private readonly ICognitiveServices _cognitiveServices;
public CroppingService(ICognitiveServices cognitiveServices)
{
_cognitiveServices = cognitiveServices;
}
public CroppingService()
{
_cognitiveServices = ServiceLocator.ServiceProvider.GetService<ICognitiveServices>();
}
public Stream GetCroppedImage(int width, int height, MediaItem mediaItem)
{
using (var streamReader = new MemoryStream())
{
var mediaStrm = mediaItem.GetMediaStream();
mediaStrm.CopyTo(streamReader);
mediaStrm.Position = 0;
var img = Image.FromStream(mediaStrm);
// The cropping size shouldn't be higher than the original image
if (width > img.Width || height > img.Height)
{
Sitecore.Diagnostics.Log.Warn($"Media file is smaller than the requested crop size. " +
$"This can result in a low quality crop. Please upload a larger image: " +
$"Min Height: {height}, Min Width: {width}. File: {mediaItem.DisplayName}, Path: {mediaItem.MediaPath}", this);
}
// if the cropping size exceeds the cognitive services limits, get the focus point and crop
if (width > 1024 || height > 1024)
{
var area = _cognitiveServices.GetAreaOfImportance(streamReader.ToArray());
var cropImage = CropImage(img, area.areaOfInterest.X, area.areaOfInterest.Y, width, height);
return cropImage;
}
var thumbnailResult = _cognitiveServices.GetThumbnail(streamReader.ToArray(), width, height);
return new MemoryStream(thumbnailResult);
}
}
public string GenerateThumbnailUrl(int width, int height, MediaItem mediaItem)
{
using (var streamReader = MediaManager.GetMedia(mediaItem).GetStream())
{
using (var memStream = new MemoryStream())
{
streamReader.Stream.CopyTo(memStream);
var thumbnail = _cognitiveServices.GetThumbnail(memStream.ToArray(), width, height);
var imreBase64Data = System.Convert.ToBase64String(thumbnail);
return $"data:image/png;base64,{imreBase64Data}";
}
}
}
private Stream CropImage(Image source, int x, int y, int width, int height)
{
var outputStrm = new MemoryStream();
using (var bmp = new Bitmap(width, height))
{
using (var gr = Graphics.FromImage(bmp))
{
gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
using (var wrapMode = new ImageAttributes())
{
wrapMode.SetWrapMode(WrapMode.TileFlipXY);
gr.DrawImage(source, new Rectangle(0, 0, bmp.Width, bmp.Height), x, y, width, height, GraphicsUnit.Pixel, wrapMode);
}
}
bmp.Save(outputStrm, source.RawFormat);
}
// Rewind the stream so callers can read the cropped image from the start
outputStrm.Position = 0;
return outputStrm;
}
}
}
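As a quick usage sketch (assuming the service is resolved from Sitecore’s service locator and mediaItem is a valid MediaItem; the wiring here is an assumption, not part of the code above):

```csharp
// Hypothetical usage sketch: resolve the cropping service and request a crop.
// A 1200x600 crop exceeds the thumbnail API's 1024px limit, so GetCroppedImage
// falls back to the "Get Area Of Interest" + System.Drawing path.
var croppingService = ServiceLocator.ServiceProvider.GetService<ICroppingService>();
using (var cropped = croppingService.GetCroppedImage(1200, 600, mediaItem))
{
    // Write the stream to the response, save it to the media cache, etc.
}
```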
Let’s see this in action!
After picking your picture in the AI Cropping Image field, it is automatically cropped and you can see the different thumbnails. You can choose or change the thumbnails by updating the child items under /sitecore/system/Settings/Foundation/Vision/Thumbnails.
Also note that you get an auto-generated alt text (“Diego Maradona holding a ball”) and a list of tags.
AI Cropping Image Field
The results
This is how the different cropped images look in the front end. Depending on your front-end implementation, you will define different cropping sizes per breakpoint.
In the following implementation, I’m setting the image as a background and using the option to render just the image URL, as follows:
<img alt="a close up of a person wearing glasses"
src="https://vision.test.cm/-/media/project/vision/homepage/iatestimage.png?
w=600&h=600&smartCropping=true&hash=C2E215FE2CF74D4C8142E35619ABB8DE">
Note: Have a look at the AdvancedImageParameters:
OnlyUrl: If true, it renders just the image URL (for use as the src in an img tag).
AutoAltText: If true, the alt text is replaced by the one generated by Azure AI.
Width and Height: Integer values to specify the cropping size.
Widths and Sizes: If set, a srcset is generated for the different breakpoints.
SizesTag and SrcSetTag: These are mandatory when using the previous settings.
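A call producing markup like the sample below might look like this. This is only a sketch: the helper name RenderAICroppedImage and the string-based Widths/Sizes parameter types are assumptions, not part of the shipped API.

```csharp
// Hypothetical Razor helper call; adapt names and types to your implementation.
@Html.RenderAICroppedImage(Model.Image, new AdvancedImageParameters
{
    AutoAltText = true,
    Widths = "170,233,340,466",
    Sizes = "50vw,(min-width: 999px) 25vw,(min-width: 1200px) 15vw",
    SrcSetTag = "data-srcset",
    SizesTag = "data-sizes"
})
```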
<img alt="a close up of a person wearing glasses" data-sizes="50vw,(min-width:
999px) 25vw,(min-width: 1200px) 15vw" data-
srcset="https://vision.test.cm/-/media/project/vision/homepage/iatestimage.png?
w=170&hash=1D04C1F551E9606AB2EEB3C712255651
170w,https://vision.test.cm/-/media/project/vision/homepage/iatestimage.png?
w=233&hash=DD2844D340246D3CF8AEBB63CE4E9397
233w,https://vision.test.cm/-/media/project/vision/homepage/iatestimage.png?
w=340&hash=3B773ACB5136214979A0009E24F25F02
340w,https://vision.test.cm/-/media/project/vision/homepage/iatestimage.png?
w=466&hash=424F7615FBECFED21F48DA0AE1FE7A5B 466w"
src="data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==">
GlassMapper extension
Finally, an extension method has been added in order to get the media URL from the image field.
In my previous post I shared a quick overview of the Azure Computer Vision API service and its implementation. If you haven’t read it yet, please do so before proceeding!
With the basics and the CognitiveServices in place, let’s move forward and create a custom image field that uses this service to handle image cropping, tagging and alt text description, all with AI.
I’ll be sharing the whole implementation on GitHub later, along with a package plugin, but let’s get into the implementation details first.
Custom Image Field
The first step is to create the custom field. To do that, go to the core DB and duplicate the /sitecore/system/Field types/Simple Types/Image field item. Let’s call it “AICroppedImage”.
Keep everything as it is except the assembly and class fields.
AICroppedImage Class
For the implementation, we simply decompiled the code from Sitecore.Kernel (Sitecore.Shell.Applications.ContentEditor.Image) and made all the needed customizations.
using Sitecore.Configuration;
using Sitecore.Data.Items;
using Sitecore.DependencyInjection;
using Sitecore.Diagnostics;
using Sitecore.Globalization;
using Sitecore.Resources.Media;
using Sitecore.Shell.Applications.ContentEditor;
using Sitecore.Web.UI.Sheer;
using System;
using System.IO;
using System.Text;
using System.Web;
using System.Web.UI;
using Microsoft.Extensions.DependencyInjection;
using System.Linq;
using Sitecore.Computer.Vision.CroppingImageField.Models.ImagesDetails;
using Sitecore.Computer.Vision.CroppingImageField.Services;
namespace Sitecore.Computer.Vision.CroppingImageField.Fields
{
public class AICroppedImage : Image
{
private readonly string ThumbnailsId = Settings.GetSetting("Sitecore.Computer.Vision.CroppingImageField.AICroppingField.ThumbnailsFolderId");
private readonly ICognitiveServices _cognitiveServices;
private readonly ICroppingService _croppingService;
public AICroppedImage(ICognitiveServices cognitiveServices, ICroppingService croppingService) : base()
{
_cognitiveServices = cognitiveServices;
_croppingService = croppingService;
}
public AICroppedImage() : base()
{
_cognitiveServices = ServiceLocator.ServiceProvider.GetService<ICognitiveServices>();
_croppingService = ServiceLocator.ServiceProvider.GetService<ICroppingService>();
}
protected override void DoRender(HtmlTextWriter output)
{
Assert.ArgumentNotNull((object)output, nameof(output));
Item mediaItem = this.GetMediaItem();
string src;
this.GetSrc(out src);
string str1 = " src=\"" + src + "\"";
string str2 = " id=\"" + this.ID + "_image\"";
string str3 = " alt=\"" + (mediaItem != null ? HttpUtility.HtmlEncode(mediaItem["Alt"]) : string.Empty) + "\"";
this.Attributes["placeholder"] = Translate.Text(this.Placeholder);
string str = this.Password ? " type=\"password\"" : (this.Hidden ? " type=\"hidden\"" : "");
this.SetWidthAndHeightStyle();
output.Write("<input" + this.ControlAttributes + str + ">");
this.RenderChildren(output);
output.Write("<div id=\"" + this.ID + "_pane\" class=\"scContentControlImagePane\">");
string clientEvent = Sitecore.Context.ClientPage.GetClientEvent(this.ID + ".Browse");
output.Write("<div class=\"scContentControlImageImage\" onclick=\"" + clientEvent + "\">");
output.Write("<iframe" + str2 + str1 + str3 + " frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" width=\"100%\" height=\"128\" " +
"allowtransparency=\"allowtransparency\"></iframe>");
output.Write("<div id=\"" + this.ID + "_thumbnails\">");
output.Write(GetThumbnails());
output.Write("</div>");
output.Write("</div>");
output.Write("<div>");
output.Write("<div id=\"" + this.ID + "_details\" class=\"scContentControlImageDetails\">");
string details = this.GetDetails();
output.Write(details);
output.Write("</div>");
output.Write("</div>");
}
protected override void DoChange(Message message)
{
Assert.ArgumentNotNull((object)message, nameof(message));
base.DoChange(message);
if (Sitecore.Context.ClientPage.Modified)
{
this.Update();
}
if (string.IsNullOrEmpty(this.Value))
{
this.ClearImage();
}
SheerResponse.SetReturnValue(true);
}
protected new void BrowseImage(ClientPipelineArgs args)
{
Assert.ArgumentNotNull((object)args, nameof(args));
base.BrowseImage(args);
if (Sitecore.Context.ClientPage.Modified)
{
this.Update();
}
}
protected new void ShowProperties(ClientPipelineArgs args)
{
Assert.ArgumentNotNull((object)args, nameof(args));
base.ShowProperties(args);
if (Sitecore.Context.ClientPage.Modified)
{
this.Update();
}
}
public override void HandleMessage(Message message)
{
Assert.ArgumentNotNull((object)message, nameof(message));
base.HandleMessage(message);
string name = message.Name;
if (name == "contentimage:clear")
{
this.ClearImage();
}
else if (name == "contentimage:refresh")
{
this.Update();
}
}
private void ClearImage()
{
if (this.Disabled)
{
return;
}
if (this.Value.Length > 0)
{
this.SetModified();
}
this.XmlValue = new XmlValue(string.Empty, "image");
this.Value = string.Empty;
this.Update();
}
protected new void Update()
{
string src;
this.GetSrc(out src);
SheerResponse.SetAttribute(this.ID + "_image", "src", src);
SheerResponse.SetInnerHtml(this.ID + "_thumbnails", this.GetThumbnails());
SheerResponse.SetInnerHtml(this.ID + "_details", this.GetDetails());
SheerResponse.Eval("scContent.startValidators()");
}
private string GetDetails()
{
var empty = string.Empty;
MediaItem mediaItem = this.GetMediaItem();
if (mediaItem != null)
{
var innerItem = mediaItem.InnerItem;
var stringBuilder = new StringBuilder();
var xmlValue = this.XmlValue;
stringBuilder.Append("<div>");
var item = innerItem["Dimensions"];
var str = HttpUtility.HtmlEncode(xmlValue.GetAttribute("width"));
var str1 = HttpUtility.HtmlEncode(xmlValue.GetAttribute("height"));
ImageDetails imageDetails;
using (var streamReader = new MemoryStream())
{
var mediaStrm = mediaItem.GetMediaStream();
mediaStrm.CopyTo(streamReader);
imageDetails = _cognitiveServices.AnalyzeImage(streamReader.ToArray());
}
if (!string.IsNullOrEmpty(str) || !string.IsNullOrEmpty(str1))
{
var objArray = new object[] { str, str1, item };
stringBuilder.Append(Translate.Text("Dimensions: {0} x {1} (Original: {2})", objArray));
}
else
{
var objArray1 = new object[] { item };
stringBuilder.Append(Translate.Text("Dimensions: {0}", objArray1));
}
stringBuilder.Append("</div>");
stringBuilder.Append("<div style=\"padding: 2px 0px 0px 0px; text-align: left;\">");
var str2 = HttpUtility.HtmlEncode(innerItem["Alt"]);
var str3 = imageDetails.Description.Captions.FirstOrDefault()?.Text;
if (!string.IsNullOrEmpty(str3) && !string.IsNullOrEmpty(str2))
{
var objArray2 = new object[] { str3, str2 };
stringBuilder.Append(Translate.Text("AI Alternate Text: \"{0}\" (Default Alternate Text: \"{1}\")", objArray2));
}
else if (!string.IsNullOrEmpty(str3))
{
var objArray3 = new object[] { str3 };
stringBuilder.Append(Translate.Text("AI Alternate Text: \"{0}\"", objArray3));
}
else
{
var objArray4 = new object[] { str2 };
stringBuilder.Append(Translate.Text("Default Alternate Text: \"{0}\"", objArray4));
}
stringBuilder.Append("<br />");
stringBuilder.Append(Translate.Text("Tags: \"{0}\"", string.Join(", ", imageDetails.Description.Tags)));
stringBuilder.Append("</div>");
empty = stringBuilder.ToString();
}
if (empty.Length == 0)
{
empty = Translate.Text("This media item has no details.");
}
return empty;
}
private Item GetMediaItem()
{
var attribute = this.XmlValue.GetAttribute("mediaid");
if (attribute.Length <= 0)
{
return null;
}
Language language = Language.Parse(this.ItemLanguage);
return Sitecore.Client.ContentDatabase.GetItem(attribute, language);
}
private MediaItem GetSrc(out string src)
{
src = string.Empty;
MediaItem mediaItem = (MediaItem)this.GetMediaItem();
if (mediaItem == null)
{
return null;
}
var thumbnailOptions = MediaUrlOptions.GetThumbnailOptions(mediaItem);
int result;
if (!int.TryParse(mediaItem.InnerItem["Height"], out result))
{
result = 128;
}
thumbnailOptions.Height = Math.Min(128, result);
thumbnailOptions.MaxWidth = 640;
thumbnailOptions.UseDefaultIcon = true;
src = MediaManager.GetMediaUrl(mediaItem, thumbnailOptions);
return mediaItem;
}
private string GetThumbnails()
{
var html = new StringBuilder();
var src = string.Empty;
var mediaItem = this.GetSrc(out src);
if (mediaItem == null)
{
return string.Empty;
}
html.Append("<ul id=\"" + this.ID + "_frame\" style=\"display: -ms-flexbox;display: flex;-ms-flex-direction: row;flex-direction: row;-ms-flex-wrap: wrap;flex-wrap: wrap;\">");
var thumbnailFolderItem = Sitecore.Client.ContentDatabase.GetItem(new Sitecore.Data.ID(ThumbnailsId));
if (thumbnailFolderItem != null && thumbnailFolderItem.HasChildren)
{
foreach (Item item in thumbnailFolderItem.Children)
{
GetThumbnailHtml(item, html, mediaItem);
}
}
html.Append("</ul>");
return html.ToString();
}
private void GetThumbnailHtml(Item item, StringBuilder html, MediaItem mediaItem)
{
if (item.Fields["Size"]?.Value != null)
{
var values = item.Fields["Size"].Value.Split('x');
var width = values[0];
var height = values[1];
int w, h;
if (int.TryParse(width, out w) && Int32.TryParse(height, out h) && w > 0 && h > 0)
{
var imageSrc = _croppingService.GenerateThumbnailUrl(w, h, mediaItem);
html.Append(string.Format("<li id=\"Frame_{0}_{1}\" style=\"width: {2}px; height: {3}px; position: relative; overflow: hidden; display: inline-block;border: solid 3px #fff;margin: 5px 5px 5px 0;\">" +
"<img style=\"position: absolute;left: 0;top: 0;margin: 0;display: block;width: auto; height: auto;min-width: 100%; min-height: 100%;max-height: none; max-width: none;\" " +
"src=\"{4}\" /><span style=\"position: absolute;" +
"top: 0;left: 0;padding: 2px 3px;background-color: #fff;opacity: 0.8;\">{5}</span></li>", this.ID, item.ID.ToShortID(), w, h, imageSrc, item.DisplayName));
}
}
}
}
}
We’re modifying the way Sitecore renders the field with some small customizations, mainly to add the thumbnails generated by the Azure Cognitive service, as well as the alt text and tags.
OK, so that’s pretty much it. Let’s deploy our code and see how it looks in the Sitecore Content Editor. The only thing you need to do next is create a template and make use of the newly created “AI Cropped Image” field.
Et voilà! The image field now renders a few thumbnails that give you an idea of the final result when rendering the image in the front end. As you can see, it also provides some tags and a description (“Diego Maradona holding a ball”) that is used as alt text, everything coming from the Azure AI service. Awesome!
Make the field work as an OOTB Sitecore image field
The next step is to make sure we can still use the Sitecore helpers to render this field. To make this possible, we need to customize the Sitecore.Pipelines.RenderField.GetImageFieldValue processor. As before, we decompile the OOTB code from Sitecore.Kernel, make our updates there, and then patch the config accordingly.
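A typical renderField patch might look like this (the assembly name is an assumption based on the namespaces used in this post):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <renderField>
        <processor patch:before="processor[@type='Sitecore.Pipelines.RenderField.GetImageFieldValue, Sitecore.Kernel']"
                   type="Sitecore.Computer.Vision.CroppingImageField.Pipelines.RenderAICroppingImageField, Sitecore.Computer.Vision.CroppingImageField" />
      </renderField>
    </pipelines>
  </sitecore>
</configuration>
```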
Here, we just need to add the newly created field type (AI Cropped Image) as a valid image field type by overriding the IsImage() method.
using Sitecore.Diagnostics;
using Sitecore.Pipelines.RenderField;
namespace Sitecore.Computer.Vision.CroppingImageField.Pipelines
{
public class RenderAICroppingImageField : GetImageFieldValue
{
public override void Process(RenderFieldArgs args)
{
Assert.ArgumentNotNull((object)args, nameof(args));
if (!this.IsImage(args))
{
return;
}
var renderer = this.CreateRenderer();
this.ConfigureRenderer(args, renderer);
this.SetRenderFieldResult(renderer.Render(), args);
}
protected override bool IsImage(RenderFieldArgs args)
{
// Note: Sitecore lowercases FieldTypeKey, so compare against the lowercase name
return args.FieldTypeKey == "ai cropped image";
}
}
}
Make it work with GlassMapper
Now we can make some quick updates so we can also benefit from the Glass helpers. Let’s add a custom field mapper: after decompiling Glass.Mapper.Sc.DataMappers.SitecoreFieldImageMapper, we can extend it to work in the same way with the newly introduced AI Cropped Image field.
using Glass.Mapper.Sc;
using Glass.Mapper.Sc.Configuration;
using Glass.Mapper.Sc.DataMappers;
using Sitecore.Data;
using Sitecore.Data.Fields;
using Sitecore.Data.Items;
using System;
using Sitecore.Computer.Vision.CroppingImageField.Fields;
namespace Sitecore.Computer.Vision.CroppingImageField.Mappers
{
public class AICroppedImageFieldMapper : AbstractSitecoreFieldMapper
{
public AICroppedImageFieldMapper(): base(typeof(AICroppedImage))
{
}
public override object GetField(Field field, SitecoreFieldConfiguration config, SitecoreDataMappingContext context)
{
var img = new AICroppedImage();
var sitecoreImage = new AICroppedImageField(field);
SitecoreFieldImageMapper.MapToImage(img, sitecoreImage);
return img;
}
public override void SetField(Field field, object value, SitecoreFieldConfiguration config, SitecoreDataMappingContext context)
{
var img = value as AICroppedImage;
if (field == null || img == null)
{
return;
}
var item = field.Item;
var sitecoreImage = new AICroppedImageField(field);
SitecoreFieldImageMapper.MapToField(sitecoreImage, img, item);
}
public override string SetFieldValue(object value, SitecoreFieldConfiguration config, SitecoreDataMappingContext context)
{
throw new NotImplementedException();
}
public override object GetFieldValue(string fieldValue, SitecoreFieldConfiguration config, SitecoreDataMappingContext context)
{
var item = context.Service.Database.GetItem(new ID(fieldValue));
if (item == null)
{
return null;
}
var imageItem = new MediaItem(item);
var image = new AICroppedImage();
SitecoreFieldImageMapper.MapToImage(image, imageItem);
return image;
}
}
}
We also need to create a custom field class that inherits from Glass.Mapper.Sc.Fields.Image:
using Glass.Mapper.Sc.Fields;
namespace Sitecore.Computer.Vision.CroppingImageField.Mappers
{
public class AICroppedImage : Image
{
}
}
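With the mapper and field class in place, a Glass model can then expose the field like any other image. A minimal sketch (the template and field names here are hypothetical):

```csharp
using Glass.Mapper.Sc.Configuration.Attributes;
using Sitecore.Computer.Vision.CroppingImageField.Mappers;

[SitecoreType(AutoMap = true)]
public class HeroBanner
{
    // Maps the hypothetical "Hero Image" AI Cropped Image field to our custom Glass type
    [SitecoreField("Hero Image")]
    public virtual AICroppedImage Image { get; set; }
}
```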
The last step is to register the mapper in the CreateResolver() method of GlassMapperScCustom.cs:
public static class GlassMapperScCustom
{
public static IDependencyResolver CreateResolver(){
var config = new Glass.Mapper.Sc.Config();
var dependencyResolver = new DependencyResolver(config);
// add any changes to the standard resolver here
dependencyResolver.DataMapperFactory.First(() => new AICroppedImageFieldMapper());
dependencyResolver.Finalise();
return dependencyResolver;
}
}
Custom Caching
To reduce the number of calls to the service, an extra layer of caching has been implemented. This cache, like any other Sitecore cache, gets flushed after publishing, and its size can easily be adjusted through its configuration.
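The caching layer itself isn’t shown in this post, but a minimal sketch of how it could be built on Sitecore’s CustomCache base class looks like this (the class name and key scheme are assumptions; clearing on publish:end would be wired up separately):

```csharp
using Sitecore.Caching;

// Hypothetical sketch: CustomCache gives us a named, size-limited cache,
// which a publish:end event handler could clear like Sitecore's other caches.
public class VisionResultsCache : CustomCache
{
    public VisionResultsCache(string name, long maxSize) : base(name, maxSize)
    {
    }

    public string GetAnalysisResult(string mediaItemId)
    {
        // Returns null when the media item hasn't been analyzed yet
        return this.GetString(mediaItemId);
    }

    public void SetAnalysisResult(string mediaItemId, string serializedResult)
    {
        this.SetString(mediaItemId, serializedResult);
    }
}
```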
In my next post, I’ll be sharing the front-end implementation, the full media request flow and the customizations needed to make it work on your site. Stay tuned!