3 Reasons To Modernize Your Data Estate


There are major benefits awaiting organizations that modernize their data estate, including enhanced security and compliance, significant cost savings, and the ability to empower employees with business insights and advanced analytics capabilities.

With an out-of-date data platform, you could be…

  • Missing out on game-changing cost benefits
  • Opening your business to potential vulnerabilities
  • Falling behind without modern analytics capabilities

What does it mean to ‘modernize’?

Modernization means taking advantage of the latest technologies to create transformational business benefits. With Microsoft, you have the flexibility to modernize your data estate in the way that’s right for your business:

  • Upgrade to SQL Server 2017: Move to a database with industry-leading performance and security—now on Windows, Linux, and Docker containers.
  • Modernize in the cloud: Move to Azure to get greater cost efficiencies, state-of-the-art security, and powerful data and analytics services – with fully-managed database-as-a-service (DBaaS) solutions.
  • Take a hybrid approach: Deploy across on-premises and the cloud to extend your infrastructure and improve business continuity.

Why modernize now?

Reduce cost & maximize your investment

Benefits of SQL Server 2017

  • #1 database in Online Transaction Processing (OLTP) price/performance
  • Unmatched Total Cost of Ownership (TCO) with everything built in
  • Get comprehensive, self-service BI at one-fifth the cost of competing solutions
  • Reduce costs and create efficiencies with faster transactions using In-Memory OLTP and up to 100x faster analytics with in-memory columnstore
  • Cut productivity losses and speed query performance without manual tuning by using Adaptive Query Processing

Benefits of Microsoft Azure

  • Scale up or down quickly with pay-as-you-go, consumption-based pricing and reduce licensing and hardware costs
  • 50% of businesses reported reaching ROI at a faster rate with cloud applications
  • Optimize costs for database workloads with different VM sizes
  • 75% lower costs on software licenses with Azure

Gain state-of-the-art, award-winning security

Benefits of SQL Server 2017

  • Least vulnerable of any enterprise database over the last seven years
  • Encrypt data at rest and in motion with Always Encrypted
  • Conceal sensitive data with Dynamic Data Masking
  • Control access to database rows with Row-Level Security

Benefits of Microsoft Azure

  • Meet a broad set of international and industry-specific compliance regulations
  • Continuous security-health monitoring across Azure, on-premises, and public clouds

Solve bigger problems with advanced business insights

Benefits of SQL Server 2017

  • Get real-time operational analytics when you combine in-memory OLTP and columnstore technologies
  • Visualize data anywhere with Mobile BI
  • Scale and accelerate machine learning by pushing intelligence to where the data lives with in-database R and Python analytics

Benefits of Microsoft Azure

  • Leading AI innovation to discover insights faster
  • Advanced machine learning capabilities at your fingertips

With the tech available to every business, modernization is now the norm. With a solution like SQL Server, you can reduce costs and maximize your investment, gain state-of-the-art, award-winning security, and solve bigger problems with advanced business insights.

If that sounds great but you’re unsure of how to proceed, it’s time to call the experts at Oakwood Systems Group at (314) 824-3000 or drop us a line in the form below. We’re industry professionals who have in-depth experience with helping businesses just like yours to plan, integrate, and execute with new technological solutions like SQL Server. Contact us today to find out more on how we can help your business.

Intelligence in the Cloud and on the Edge with Azure IoT


To learn more, download our whitepaper, “Maximizing the Impact of Your IoT Proof of Concept.”

The Internet of Things (IoT) is real and it’s here today, but to many, IoT may seem confusing, and it can be hard to see how it applies to your business.

In this post, we hope to help you understand how Azure IoT could bring your business together in new, insightful ways – from generating new revenue streams to increasing process efficiencies and delivering better customer experiences.

What is IoT?

There’s a revolution underway that is positioning companies to take operational efficiency to new levels and inform the next generation of products and services. This revolution of course, is the Internet of Things (IoT).

IoT, however, is not a technology evolution. It’s a business revolution, enabled by technology. With Microsoft’s long history of driving business success and digital transformation for customers, it’s no surprise that Microsoft is also focused on powering the business revolution through its robust Azure IoT suite of products.

Azure IoT is for every business—it powers digital transformation by unlocking insights from connected devices. You can transform those insights into action through powerful applications to create new revenue and business opportunities.

With Microsoft’s IoT platform spanning the cloud, OS and devices, we believe it is uniquely positioned to simplify the IoT journey so any customer—regardless of size, technical expertise, budget, industry or other factors—can create trusted, connected solutions that improve business and customer experiences, as well as the daily lives of people all over the world. The investment Microsoft announced just a couple of months ago will ensure that it continues to meet customers’ needs both now and in the future.

Microsoft’s IoT offerings today include what businesses need to get started, ranging from operating systems for devices, cloud services to control and secure them, advanced analytics to gain insights, and business applications to enable intelligent action. Microsoft has seen great traction with customers and partners who continue to come up with new ideas and execute them on their platform.

IoT isn’t one “thing”. It’s about an ecosystem of things or devices, the data and insights they generate, and the opportunity to take action based on the analysis of those insights.

  • Things: Build, manage, and monitor a network of physical objects by adding sensors and creating smart devices.
  • Insights: Collect data from your network and use advanced analytics to uncover new business insights and opportunities.
  • Action: Predict needs before they arise and act with precision based on unprecedented insights from your IoT network.

Why Azure IoT?

Microsoft has built a portfolio that supports the needs of all customers, and enables everyone to access the benefits of digital transformation.

Azure IoT Central is a fully managed SaaS solution that is best used when you need to get started quickly with minimal IoT experience. If your business is pursuing speed over customization, SaaS models could be the perfect fit for your IoT implementation needs. Organizations with fewer device models, more predictable scenarios, and limited IoT/IT capabilities can now reap the benefits of IoT through a SaaS approach. Those businesses that previously lacked the time, money, and expertise to develop connected products can now get started quickly with Microsoft IoT Central. Microsoft is leading the industry in providing a mature SaaS solution that meets common IoT implementation pain points.


Azure IoT solution accelerators are customizable PaaS solutions, best used when you need a lot of control over your IoT solution. If your business is implementing IoT for connected operations, or has very particular customization requirements for your connected products, you can have the control you need with Azure IoT solution accelerators. Organizations with a large number of devices or device models, and manufacturers seeking connected factory solutions, are examples of companies that can create highly customizable IoT solutions tailored to their complex needs.

In addition, Azure IoT Edge gives organizations the ability to do local processing. When combined with a PaaS or SaaS solution, edge processing can offer faster calculations and reduce the cost of data sent to the cloud.

IoT Hub

IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can use Azure IoT Hub to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. You can connect virtually any device to IoT Hub.

IoT Hub supports communications both from the device to the cloud and from the cloud to the device. IoT Hub supports multiple messaging patterns such as device-to-cloud telemetry, file upload from devices, and request-reply methods to control your devices from the cloud. IoT Hub monitoring helps you maintain the health of your solution by tracking events such as device creation, device failures, and device connections.

IoT Hub’s capabilities help you build scalable, full-featured IoT solutions such as managing industrial equipment used in manufacturing, tracking valuable assets in healthcare, and monitoring office building usage.

  • Establish bidirectional communication with billions of IoT devices: Use device-to-cloud telemetry data to understand the state of your devices and define message routes to other Azure services without writing any code. In cloud-to-device messages, reliably send commands and notifications to your connected devices and track message delivery with acknowledgement receipts. Device messages are sent in a durable way to accommodate intermittently connected devices.
  • Enhance security with per device authentication: Set up individual identities and credentials for each of your connected devices, and help retain the confidentiality of both cloud-to-device and device-to-cloud messages. To maintain the integrity of your system, selectively revoke access rights for specific devices as needed.
  • Provision devices at scale with the IoT Hub Device Provisioning Service: Speed up your IoT deployment by registering and provisioning devices with zero touch in a secure and scalable way. The IoT Hub Device Provisioning Service supports any type of IoT device compatible with IoT Hub.
  • Manage devices at scale with device management: IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. IoT Hub offers several tiers of service to best fit your scalability needs.
  • Multi-language and open source SDKs: Use the Azure IoT device SDK libraries to build applications that run on your devices and interact with IoT Hub. Supported platforms include multiple Linux distributions, Windows, and real-time operating systems. Supported languages include C, C#, Java, Python, and Node.js. IoT Hub and the device SDKs support the following protocols for connecting devices: HTTPS, AMQP, AMQP over WebSockets, MQTT, and MQTT over WebSockets.
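To make the device-to-cloud messaging pattern concrete, here is a minimal sketch of the payload-shaping step a device performs before handing a message to the device SDK. The field names (`deviceId`, `temperature`, `humidity`) are illustrative assumptions, not a schema IoT Hub requires; your solution backend defines its own.

```python
import json
import time

def build_telemetry(device_id, temperature, humidity):
    """Shape a device-to-cloud telemetry message as a JSON string.

    Field names here are hypothetical; IoT Hub treats the message body as
    opaque bytes, so the schema is a contract between device and backend.
    """
    return json.dumps({
        "deviceId": device_id,
        "temperature": temperature,
        "humidity": humidity,
        "timestamp": int(time.time()),
    })

# With the real Azure IoT device SDK, this payload would be wrapped in a
# message object and sent over AMQP or MQTT; only the shaping step is shown.
msg = build_telemetry("thermostat-01", 22.5, 41.0)
print(msg)
```

The same shaped payload works unchanged across the SDK's supported protocols, since the body is just bytes to the hub.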

Azure IoT Edge

Azure IoT Edge is a fully managed service that delivers cloud intelligence locally by deploying and running artificial intelligence (AI), Azure services, and custom logic directly on cross-platform IoT devices. You can run your IoT solution securely and at scale—whether in the cloud or offline.

Move cloud and custom workloads to the edge, securely: Azure IoT Edge moves cloud analytics and custom business logic to devices so that your organization can focus on business insights instead of data management. Enable your solution to truly scale by configuring your IoT software, deploying it to devices via standard containers, and monitoring it all from the cloud. With IoT Edge, your edge devices operate reliably and securely even when they’re offline or have intermittent connectivity to the cloud. Azure IoT device management automatically syncs the latest state of devices once they’re reconnected to ensure seamless operability.

Seamless deployment of AI and advanced analytics to the edge: IoT Edge allows you to deploy complex event processing, machine learning, image recognition, and other high-value artificial intelligence without writing it in-house. Run Azure services such as Functions, Stream Analytics, and Machine Learning on-premises. Create AI modules and make them available to the community.

Easily build AI at the edge with the AI Toolkit for Azure IoT Edge. Most data becomes useless just seconds after it’s generated, so having the lowest latency possible between the data and the decision is critical. IoT Edge optimizes for performance between edge and cloud while ensuring management, security, and scale.

Only a small fraction of IoT data acquired is meaningful post-analytics. Use services such as Azure Stream Analytics or trained models to process the data locally and send only what’s needed to the cloud for further analysis. This reduces the cost associated with sending all your data to the cloud while keeping data quality high.
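The local-filtering idea above can be sketched in a few lines. This is not Azure Stream Analytics itself, just an illustrative stand-in showing the economics: routine readings are discarded at the edge and only anomalies are forwarded. The threshold and record shape are assumptions.

```python
def filter_at_edge(readings, threshold=75.0):
    """Keep only the readings worth sending to the cloud.

    Simulates edge processing: locally discard routine telemetry and
    forward only anomalous values, reducing cloud ingress cost.
    """
    return [r for r in readings if r["value"] > threshold]

readings = [
    {"sensor": "temp-1", "value": 70.2},
    {"sensor": "temp-1", "value": 92.7},  # anomaly
    {"sensor": "temp-2", "value": 68.9},
]
to_cloud = filter_at_edge(readings)
print(len(to_cloud))  # 1 of 3 readings forwarded
```

In a real deployment this logic would run as an IoT Edge module in a container, with the surviving messages routed upstream to IoT Hub.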

Configure, update and monitor from the cloud: Azure IoT Edge integrates seamlessly with Azure IoT solution accelerators to provide one control plane for your solution’s needs. Cloud services allow users to:

  • Create and configure a workload to be run on a specific type of device.
  • Send a workload to a set of devices.
  • Monitor workloads running on devices in the field.

Compatible with popular operating systems: Azure IoT Edge runs on most operating systems that can run containers.

Code symmetry between cloud and edge for easy development and testing: IoT Edge holds to the same programming model as other Azure IoT services; for example, the same code can be run on a device or in the cloud. IoT Edge supports operating systems such as Linux and Windows, and languages such as Java, .NET Core 2.0, Node.js, C, and Python, so you can code in a language you know and reuse existing business logic without writing it from scratch.

Secure solution from chipset to the cloud: Intelligent edge devices face security threats ranging from physical tampering to IP hacking. IoT Edge is designed for security that extends to different risk profiles and deployment scenarios, and offers the same protection you expect from all Azure services.

Azure Time Series Insights

Azure Time Series Insights is a fully managed analytics, storage, and visualization service for managing IoT-scale time-series data in the cloud. Instantly explore and analyze billions of events from your IoT solution.

IoT scale time-series data store: Time Series Insights manages the storage of your data. At its core, Time Series Insights has a database designed with time series data in mind. Because it is scalable and fully managed, Time Series Insights handles the work of storing and managing events. To ensure data is always easily accessible, it stores your data in memory and on SSDs for up to 400 days. If you’re building an application, either for internal consumption or for external customers to use, Time Series Insights can be used as a back-end for indexing, storing, and aggregating time series data.

Schema-less store, just send data: Today, many organizations are limited by siloed data that’s difficult to compare onsite in one location, let alone many. With Time Series Insights, you now have a view of your time-series data across all your locations. It’s built for IoT-scale data so that you can visualize and interact with billions of streams of sensor data from all your connected things.

Easy IoT Hub connection: Azure Time Series Insights requires no up-front data preparation. Connect to millions of events in your Azure IoT Hub or Event Hub in minutes. Once connected, visualize and interact with sensor data to quickly validate your IoT solutions. You can interact with your data without writing code.

Store, query and visualize billions of events: Time Series Insights provides a query service, both in the TSI explorer and by using APIs that are easy to integrate for embedding your time series data into custom applications. You can interactively query billions of events in seconds – on demand.

Get near real-time insights in seconds: Get more value out of your time-series data with storage, analysis, and visualization, all in one place. Time Series Insights ingests hundreds of millions of sensor events per day and makes up to 400 days’ worth of time-series data available to query within one minute to empower quick action. Gain deeper insights into your sensor data by spotting trends and anomalies fast, which allows you to conduct root-cause analyses and avoid costly downtime. Plus, unlock hidden trends by cross-correlating discrete data and viewing real-time and historical data simultaneously.
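The kind of interval aggregation a time-series store performs when you query raw events can be sketched as follows. This is a toy stand-in for what Time Series Insights does at scale, not its API; the timestamps, values, and 60-second interval are made up.

```python
from collections import defaultdict

def bucket_average(events, interval=60):
    """Average event values per fixed time interval (in seconds).

    events: iterable of (timestamp_seconds, value) pairs.
    Returns {interval_start: mean_value}, in ascending time order.
    """
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[ts - ts % interval].append(value)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Four raw sensor events collapse into two one-minute averages.
events = [(0, 10.0), (30, 14.0), (65, 20.0), (90, 22.0)]
print(bucket_average(events))  # {0: 12.0, 60: 21.0}
```

Querying pre-aggregated intervals like this, rather than raw events, is what makes interactive exploration of billions of points feasible.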

Build apps using Time Series Insights APIs: Integrate Azure Time Series Insights data into your existing applications or create new custom solutions with Time Series Insights REST query APIs. Adding Time Series Insights data into existing workflows will allow you to get more out of your time series data, and when you use this data to build custom solutions for your users, you’ll provide more value to your partners.

Azure IoT Central

Azure IoT Central is a fully managed global IoT SaaS (software-as-a-service) solution that makes it easy to connect, monitor, and manage your IoT assets at scale. It allows you to bring your connected products to market faster while staying focused on your customers and it reduces the complexity of IoT solutions because it doesn’t require cloud solution expertise.


Fully hosted and managed by Microsoft: Azure IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications.

As an operator, you use the Azure IoT Central application to manage the devices in your Azure IoT Central solution. Operators can perform tasks such as:

  • Monitoring the devices connected to the application.
  • Troubleshooting and remediating issues with devices.
  • Provisioning new devices.

No cloud development expertise required: You can build production-grade IoT applications in hours, without worrying about managing infrastructure or hiring developers with specialized skills. Reduce the complexity of customizing, deploying, and scaling an IoT solution. Bring your connected solutions to market faster—while you stay focused on your customers.

Device connectivity and management: Easily build and configure your IoT solution using Azure IoT Central without cloud development expertise. The easy-to-use interface makes it simple for you to connect, manage, and control access to millions of connected products remotely, throughout their lifecycle.

Monitoring rules and triggered actions: To monitor and manage devices effectively, users can define the different types of measurements a device emits and the application displays. Microsoft IoT Central supports three measurement types:

  • Telemetry: device-emitted numeric values, often collected at a regular frequency (e.g. temperature).
  • Events: device-emitted numeric or non-numeric values generated on the device, with no inferable relationship over time (e.g. a button press or an error code).
  • State: device-emitted numeric or non-numeric values that define the state of a device or one of its parts, maintained until the device reports a state change (e.g. engine ON).
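The distinction between the three measurement types can be sketched with a small tracker. This class is purely illustrative (IoT Central exposes no such class); it shows how telemetry accumulates as samples, events are logged as occurrences, and state persists until the device reports a change.

```python
class DeviceMeasurements:
    """Toy model of the three measurement kinds IoT Central distinguishes."""

    def __init__(self):
        self.telemetry = []   # regular numeric samples, e.g. temperature
        self.events = []      # point-in-time occurrences, e.g. error codes
        self.state = {}       # last reported value per state name

    def on_telemetry(self, name, value):
        self.telemetry.append((name, value))

    def on_event(self, name):
        self.events.append(name)

    def on_state(self, name, value):
        self.state[name] = value  # maintained until the next reported change

m = DeviceMeasurements()
m.on_telemetry("temperature", 21.5)
m.on_event("button_pressed")
m.on_state("engine", "ON")
m.on_state("engine", "OFF")   # state overwrites; telemetry/events accumulate
print(m.state["engine"])  # OFF
```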

User roles and permissions: Roles enable you to control who, within your organization, can perform various Azure IoT Central tasks.

Analytics, dashboards and visualization: Microsoft IoT Central integrates Azure Time Series Insights – a fully managed analytics, storage, and visualization service for managing IoT-scale time-series – to enable users to explore and analyze billions of events streaming simultaneously from devices deployed all over the world. Microsoft IoT Central provides massively scalable time-series data storage and several ways to explore data, making it super easy to explore and visualize millions of data points simultaneously, conduct root-cause analysis, and to compare multiple sites and assets. Within an application, time-series visualization is available for a single device, for a Device Set – with the ability to compare multiple devices – and as a multi-purpose Analytics tool.

IoT Solution Accelerators

IoT solution accelerators are a collection of complete, ready-to-deploy, IoT solutions that implement common IoT scenarios such as remote monitoring, connected factory, and predictive maintenance. When you deploy a solution accelerator, the deployment includes all the required cloud-based services along with any required application code.


The solution accelerators are starting points for your own IoT solutions. The source code for all the solution accelerators is open source and is available in GitHub. You can also use the solution accelerators as learning tools before building a custom IoT solution from scratch. The solution accelerators implement proven practices for cloud-based IoT solutions for you to follow. The application code in each solution accelerator includes a dashboard that lets you manage the solution accelerator.

All the solution accelerators follow the same design principles and goals. They are designed to be:

  • Scalable: letting you connect and manage millions of connected devices.
  • Extensible: enabling you to customize them to meet your requirements.
  • Comprehensible: enabling you to understand how they work and how they are implemented.
  • Modular: letting you swap out services for alternatives.
  • Secure: combining Azure security with built-in connectivity and device security features.

Preconfigured solutions

Remote Monitoring: Use this solution accelerator to collect telemetry from multiple remote devices and to control them. Example devices include cooling systems installed on your customers’ premises or valves installed in remote pump stations.

Connected Factory: Use this solution accelerator to collect telemetry from industrial assets with an OPC Unified Architecture interface and to control them. Industrial assets might include assembly and test stations on a factory production line.

Predictive Maintenance: Use this solution accelerator to predict when a remote device is expected to fail so you can carry out maintenance before the predicted failure happens. This solution accelerator uses machine learning algorithms to predict failures from device telemetry. Example devices might be airplane engines or elevators.

You Can Trust Azure

Microsoft understands that every company, organization, and industry has unique needs and requirements. That’s why they continue to invest in their global infrastructure: to provide the scale and performance needed to bring applications closer to users, keep those applications running with robust resiliency features, and better meet your local data residency and compliance needs.

Still Have Questions?

The professionals at Oakwood Systems Group, Inc. can help. We’d like to talk to you about how your organization can leverage the power of IoT!

Interested in a 212% ROI?


When you think about switching from running and housing your SQL Server on-premises to Azure, the first thing that comes to mind is probably the infrastructure maintenance savings.

But what about hidden savings? Like efficiency savings for your IT team and database admins? Or the freedom to expand your capacity as soon as you need it?

Just last week we detailed that SQL Server 2008 and SQL Server 2008 R2 will no longer be supported by Microsoft starting in July 2019. Beyond the reasons stated above, here are a few more reasons why this issue should be addressed sooner rather than later.

  1. Mitigate risks with platform security and compliance: There will be no access to critical security updates, opening the potential for business interruptions and loss of data.
  2. Upgrade to better cost efficiency: Maintaining legacy servers, firewalls, intrusion systems, and other tools can get expensive quickly.
  3. Modernize to innovate: Grow your environments with data, analytics and the cloud.

In a recent financial impact study by Forrester, they detail all of the potential savings, costs, frustrations, and advantages associated with switching to the Azure cloud and the results are exciting!

At Oakwood Systems Group, Inc., we want what is best for your business. Contact us to learn more.

Drive secure and scalable business success with a modern data platform


If you really want to make a difference in your organization, you can start by modernizing your IT infrastructure. That’s what Kyle began with when he started to work for the City of Corona. He worked hard to help the citizens of Corona by moving the city’s IT services to Microsoft 365 and Azure. This allowed the Corona government to provide real-time data and insights for their traffic management professionals; enable better communication with first responders via mobile devices; and connect with the community.

Editing in Microsoft Teams and being able to access features from a mobile platform brought huge time savings and removed redundant efforts among the staff, saving time and resources. With Azure, the city can now access and analyze data quickly, so they can respond to the changing environment faster and with greater accuracy.

Thanks to Kyle and Microsoft, Corona’s shift to a modern IT platform helps them to rapidly meet the changing needs of the city and its employees.

Send Oakwood Systems Group, Inc. an email today to find out how we can help you and your organization shift to a modern business platform with Microsoft.

Why “Backup” is Becoming a Dirty Word


A very interesting topic we like to chat about is backup protocols.  Are you excited?  Here is what happened.  A robust conversation took place this morning on the topic of “backup,” after one of our clients put out a call for help.  They can no longer complete their backups during the agreed-upon backup window, and wanted recommendations on storage vendors who might be able to help address the problem.  Someone in our conversation said, “they have an infrastructure that will no longer support their needs.”  This is where it got interesting.

How Important Is Backup?

The response to that comment was “Wrong.  They have an infrastructure that will no longer support their protocols.”  Unfortunately, most organizations are still laboring under backup protocols that were developed twenty years ago, and while the capabilities and platforms of IT have changed substantially in that time, backup protocols have not.

Disaster Recovery

They need to.  The concept of “backup” needs to go away, and the conversation needs to morph to disaster recovery and business continuity – equally old terms but more relevant now than ever.  Virtualization, private cloud, and public cloud offerings have changed the mechanics of how IT delivers services to the business.  The agility of new capabilities creates new opportunities for disaster recovery and business continuity.  Device-based backup decisions cost organizations time and money.  Consider them a small part of a bigger picture.  We're not saying to toss out the tapes; tape still has its place, but it is not the best choice for most DR and BC scenarios.  Users are demanding more, and we are demanding more of our users, including 24/7 access to them, which means they need 24/7 access to corporate information.


The short answer is that this is a much longer conversation.  Why are you backing up, and what are you backing up?  Have you thought about what retrieval requirements exist – what needs to be online, versus near-line, versus off-line?  What compliance issues are driving some decisions?  What SLAs exist between IT and the business?  What do our users need?  If these questions sound familiar, you're right; they are the same questions your storage vendor asked you before you bought the last backup device (I know; I used to ask those questions myself).  The answers are multi-layered.  Can you spin up a new instance of an application in five minutes?  Then don't worry about backing it up.  How can you leverage cloud services?  How many different places are housing critical data?  What does your user community require?
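The online/near-line/off-line question above boils down to mapping each dataset's retrieval SLA to a storage tier. A minimal sketch, assuming made-up threshold values and dataset names; your actual SLAs with the business define the real cutoffs.

```python
def storage_tier(max_retrieval_minutes):
    """Map a retrieval-time requirement to a storage tier.

    Thresholds are illustrative, not prescriptive.
    """
    if max_retrieval_minutes <= 5:
        return "online"       # instant access, e.g. replicated disk
    if max_retrieval_minutes <= 240:
        return "near-line"    # e.g. cool cloud storage
    return "off-line"         # e.g. archive tier or tape

# Hypothetical datasets and their agreed maximum retrieval times (minutes).
requirements = {"ERP database": 5, "email archive": 240, "legal hold": 10080}
tiers = {name: storage_tier(mins) for name, mins in requirements.items()}
print(tiers)
```

The point is that the tiering decision falls out of the SLA conversation, not out of which backup device you happen to own.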

Yes, it's a bigger conversation.  Forget about backup.  It is a mechanical process that is no longer relevant.  Think about disaster recovery.  Business continuity.  Continuous availability, data security, and recovery, and configuration integrity.  Think about creating an always-on environment to match our always-on culture.  Think about creating strategic value for the business.  Today's DR and BC solutions will likely free up a lot of time.  Create more innovation within the business.  Or golf more.  Your call.

Next Steps

Review our case studies and engagements where we helped companies just like yours solve a variety of business needs.

About Oakwood

Since 1981, Oakwood has been helping companies of all sizes, across all industries, solve their business problems.  We bring world-class consultants to architect, design and deploy technology solutions to move your company forward.   Our proven approach guarantees better business outcomes.  With flexible engagement options, your project is delivered on-time and on budget.  11,000 satisfied clients can’t be wrong.

The OpsMan Alert Connector and An Often Over-Looked Step


If you have both Operations Manager and Service Manager in your environment, it can be very useful to set up both the Configuration Item (CI) connector and the Alert Connector. The CI connector will help build your CMDB, and the Alert Connector will sync your OpsMan alerts to Service Manager, creating incidents for your service desk to work from.

What’s The Purpose?

Setting both of these connectors up within the Service Manager console is a simple, straightforward task. However, it’s all too common to see the Alert Connector fail to sync once created. You won’t know why; you’ll just keep looking at it, telling yourself that it looks right.

A commonly forgotten step in setting up the Alert Connector is configuring a subscription for the connector inside the OpsMan console. It’s not difficult.

In the OpsMan console, on the Administration pane, click Internal Connectors, nested under Product Connectors. You will see an Alert Sync object with the name of your connector. Right-click it, select Properties, then click Add to set up your subscription.

Wait a few minutes, and check the status of the Alert Connector in the Service Manager console again.


SCSM 2012 – How-To: Relationship Subscriptions


When performing Service Manager implementations, a near-universal request from clients is to have a notification email triggered when a work item’s assignment changes. As a Service Manager admin, you may or may not already be aware that this functionality is present within Service Manager; however, as of the 2012 release, it cannot be created from within the GUI. Instead, it requires a number of changes that can only be made by directly editing the notification subscription’s XML. There are a number of blog posts out there on the internet already about how to make the necessary changes; however, one thing each of these has in common is that the solutions are very much environment-specific, requiring the lookup of the appropriate notification template GUID, unique to each environment, in order to work.

My co-worker and I decided that we wanted to find a way to present an all-in-one management pack which could be imported into any environment without any need for looking up a GUID and editing XML. With a bit of experimentation, we were successful. I’m writing this blog post in order to make our management pack available for others as well as detailing what was needed in order to make this a true all-in-one solution. Additionally, the majority of the other blogs about relationship-type notification subscriptions, as of the time of this writing, contain outdated XML code from the Service Manager 2010 release. This post will also show the 2012 workflow XML with the updated notification assembly. Let’s dive in!

To begin with, there were a few considerations from Service Manager’s architecture that we had to keep in mind. Chiefly, for a true all-in-one solution, any notification templates used by our subscriptions would also need to be included. Since notification templates naturally need to remain editable, and unsealed management packs cannot be referenced by other management packs, our notification templates would have to be stored in the same management pack as our subscriptions. So the first step was to create a new management pack in the console and create a notification template for each of the planned subscriptions.

Next, “dummy” subscriptions were created via the Create Notification Subscription Wizard. Since the plan was to convert them into relationship-type subscriptions via XML anyway, the criteria were left blank. A notification template and a target relationship user had to be selected in order to save each subscription, though these too would later be changed in the XML. The real reason to create the subscriptions in the console first was simply to establish the XML framework (as well as to provide a source for screenshots). Once we’d made a “dummy” for each of our planned subscriptions, the MP was exported.

The XML generated by the Notification Subscription wizard for our Incident Assigned To User subscription looked like this:

Original Subscription
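The original screenshot is not reproduced here, but a simplified sketch of what such a wizard-generated data source typically looks like is below. The element IDs, the class GUID, and the `Subscriptions!` alias are illustrative assumptions, not values from the author's environment.

```xml
<!-- Sketch of a wizard-generated instance subscription data source.
     IDs, the class GUID, and the Subscriptions! alias are placeholders. -->
<DataSource ID="DS" TypeID="Subscriptions!Microsoft.SystemCenter.CmdbInstanceSubscription.DataSourceModule">
  <Subscription>
    <!-- Instance subscription: watches instances of a class (here, the
         Incident class, identified by its GUID) for matching changes -->
    <InstanceSubscription Type="00000000-0000-0000-0000-000000000000">
      <UpdateInstance>
        <Criteria />
      </UpdateInstance>
    </InstanceSubscription>
    <PollingIntervalInSeconds>60</PollingIntervalInSeconds>
    <BatchSize>100</BatchSize>
  </Subscription>
</DataSource>
```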

There are two main components to a notification subscription – the Data Source and the Write Action. The Data Source element defines the subscription’s trigger – the matching criteria for which the CMDB is monitored in order to trigger the Write Action. The Write Action is the action performed by the workflow. It is a Windows Workflow Foundation definition that passes parameters from Service Manager into the specified assembly, which assembles the message components and sends the email to the selected recipient(s).

By default, all notification subscriptions created in the console are what is known as an Instance Subscription. This type of subscription monitors the CMDB for any cases where an instance (individual object) of the selected class (in this case, the Incident class, identified by its GUID) matches the selected criteria. When a match is found, it triggers the Write Action. The limitation to Instance Subscriptions is that they look purely at property values and changes made to property values; they do not look at the relationships to related objects. As such, while a property value change on a related object can be used to trigger an instance subscription, the change of one related object to another (even with different property values) does not trigger the Instance Subscription.

Here is where the other subscription type, the Relationship Subscription, comes into play. The InstanceSubscription element was removed and replaced with the following code:

Relationship Subscription
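In sketch form, the replacement looks roughly like this; the `WorkItem!`, `CoreIncident!`, and `System!` aliases are assumed to be defined in the management pack's References section, as explained further on.

```xml
<!-- Sketch of a relationship subscription for Incident Assigned To User.
     The aliases must match references declared in this management pack. -->
<RelationshipSubscription
    RelType="$MPElement[Name='WorkItem!System.WorkItemAssignedToUser']$"
    SourceType="$MPElement[Name='CoreIncident!System.WorkItem.Incident']$"
    TargetType="$MPElement[Name='System!System.User']$">
  <!-- Trigger when a new object is added via the targeted relationship -->
  <AddRelationship />
</RelationshipSubscription>
```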

A Relationship Subscription is unaware of object properties, such as “Username”, “ID”, “Status”, etc. Instead, what it looks for is a change in the related object on a given relationship class – specifically, when a new object is added via the targeted relationship. There are three elements to a Relationship Subscription:

• RelType = The relationship class which the subscription should monitor.
• SourceType = The specific class (or child class) which is the Source defined in the relationship.
• TargetType = The specific class (or child class) which is the Target defined in the relationship.

In this example, we’re looking at the Assigned To User relationship, known internally as “System.WorkItemAssignedToUser”. Here’s what the relationship definition looks like in the System.WorkItem.Library management pack:

Work Item Assigned To User Relationship
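A rough sketch of that definition follows; the ID attributes and cardinality values are paraphrased from memory and should be treated as approximate rather than a verbatim copy of System.WorkItem.Library.

```xml
<!-- Sketch of the Work Item Assigned To User relationship definition.
     Source/Target IDs and cardinalities here are illustrative. -->
<RelationshipType ID="System.WorkItemAssignedToUser" Accessibility="Public"
                  Abstract="false" Base="System!System.Reference">
  <Source ID="IsRelatedToWorkItem" Type="System.WorkItem" />
  <Target ID="AssignedToUser" MaxCardinality="1" Type="System!System.User" />
</RelationshipType>
```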

As you can see, the Source is “System.WorkItem” and the Target is “System.User”. Because the Incident class (“System.WorkItem.Incident”) descends from the Work Item class in the Service Manager data model, it also inherits the “System.WorkItemAssignedToUser” relationship. As such, we can use the Incident class as the Source within our Relationship Subscription to limit this subscription and have it only trigger when the relationship changes between the Incident and User classes.

(For those who are unfamiliar with management pack XML, the “WorkItem!”, “CoreIncident!”, and “System!” components are aliases which define in which separate management pack the specified element is defined. Each of these referenced management packs and their aliases is defined in the top section of the management pack, under “References”.)

The same structure can be used to create a Relationship Subscription for any relationship in Service Manager, even custom relationships. Simply enter the appropriate RelType, SourceType, and TargetType entries for the desired relationship and classes, just as we’ve done in the example here.

Because the Relationship Subscription code pulls information based upon class names rather than GUIDs, the code for a given relationship is the same regardless of the Service Manager environment. The Write Action, however, is where environment-specific customizations usually need to be done in order to define which notification template the workflow should use when sending an email. Here is a close-up on the default Write Action configuration in Service Manager 2012:

Write Action (GUID)
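That screenshot can be approximated with the sketch below. The parameter expressions are abbreviated placeholders rather than the exact values the wizard generates, and the template GUID shown is deliberately fake.

```xml
<!-- Sketch of the default notification Write Action (SCSM 2012).
     Parameter values are abbreviated placeholders, not wizard-exact. -->
<WriteAction ID="WA" TypeID="Subscriptions!Microsoft.EnterpriseManagement.SystemCenter.Subscription.WindowsWorkflowTaskWriteAction">
  <Subscription>
    <WindowsWorkflowConfiguration>
      <!-- DLL containing the Windows Workflow being triggered -->
      <AssemblyName>Microsoft.EnterpriseManagement.Notifications.Workflows</AssemblyName>
      <!-- Name of the specific workflow being triggered -->
      <WorkflowTypeName>Microsoft.EnterpriseManagement.Notifications.Workflows.SendNotificationsActivity</WorkflowTypeName>
      <WorkflowParameters>
        <!-- First three parameters are populated dynamically by the data source -->
        <WorkflowParameter Name="SubscriptionId" Type="string">$MPElement$</WorkflowParameter>
        <WorkflowParameter Name="DataItems" Type="string">$Data/.$</WorkflowParameter>
        <WorkflowParameter Name="InstanceIds" Type="string">$Data/BaseManagedEntityId$</WorkflowParameter>
        <!-- Environment-specific GUID of the notification template -->
        <WorkflowParameter Name="TemplateIds" Type="string">11111111-2222-3333-4444-555555555555</WorkflowParameter>
        <!-- Target user relationship, populated via the undocumented $Context variable -->
        <WorkflowParameter Name="PrimaryUserRelationships" Type="string">$Context/...$</WorkflowParameter>
      </WorkflowParameters>
    </WindowsWorkflowConfiguration>
  </Subscription>
</WriteAction>
```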

There are several components to the Write Action:

• AssemblyName = Lists the DLL which contains the Windows Workflow being triggered.
• WorkflowTypeName = The name of the specific workflow being triggered.
• WorkflowParameters = Used to define the workflow’s inputs and set data values for those inputs.

In Service Manager 2012, the workflow used for Notification Subscriptions is called “Microsoft.EnterpriseManagement.Notifications.Workflows.SendNotificationsActivity”, and it can be found in the Microsoft.EnterpriseManagement.Notifications.Workflows.dll.

There are five parameters which Service Manager passes into the workflow engine for this type of workflow. The first three – “SubscriptionID”, “DataItems”, and “InstanceIds” – are populated dynamically by the data source via the $Data and $MPElement variables, which insert data based on the given trigger; their usage is documented here: http://msdn.microsoft.com/en-us/library/hh964966.aspx.

The fourth parameter, “TemplateIds”, lists the GUID of the notification template to use when formatting the email. Because content in unsealed management packs is given a randomly generated GUID upon creation or import, this value will be unique in every separate Service Manager implementation. This is why the general instruction for configuring relationship subscriptions is to simply look up the GUID of the appropriate notification template in your environment and update the XML accordingly.

The final parameter, “PrimaryUserRelationships”, is also dynamically populated – via the undocumented $Context variable – and lists the target user relationship to use when selecting which user to notify via email.

As stated above, our desire was to create a management pack containing these relationship subscriptions which would require zero XML editing, in order to accelerate deployment. The use of a GUID in “TemplateIds” poses two problems for that goal. First, because the GUID will be different in every environment, there is no single GUID we could enter that would let the management pack be imported into a new environment without editing XML. Second, because our notification templates are located within the same management pack as our subscriptions, they won’t even have GUIDs assigned to them yet, so there is no way to look up the appropriate GUID in the first place.

This “problem” can be resolved with the same variable notation shown above – the $MPElement variable. Because our notification templates are located in the same management pack, we have no issue referencing them; all we need to do is enter the appropriate $MPElement variable referencing the notification template’s element ID. The result is this:

Write Action (MP Element)
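In sketch form, the only change from the GUID version of the Write Action is the TemplateIds parameter:

```xml
<!-- TemplateIds now references the template by element name rather than
     GUID, so it resolves correctly in any environment at import time. -->
<WorkflowParameter Name="TemplateIds" Type="string">$MPElement[Name='NotificationTemplate.IncidentAssignedToUser']$</WorkflowParameter>
```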

Note that under “TemplateIds”, the GUID has been replaced with “$MPElement[Name='NotificationTemplate.IncidentAssignedToUser']$”. This is the element name of the notification template as found further down in the same management pack. Here’s what the template looks like:

Notification Template

With the $MPElement entry and the local notification template, this can now be imported into any Service Manager 2012 environment and work immediately without any XML editing. The final, complete code for the relationship subscription looks like this:

Finished RelSub

We went ahead and configured separate relationship subscriptions and notification templates within our management pack for the following relationship changes:

• Incident Affected User
• Incident Assigned To User
• Incident Primary Owner
• Service Request Affected User
• Service Request Assigned To User
• Change Request Assigned To User
• Release Record Assigned To User
• Problem Assigned To User
• Activity Assigned To User

Each subscription is configured to be disabled upon import, ensuring that no notifications are inadvertently sent while notification templates are being customized, or triggered by relationship changes for which the client does not want notifications. The management pack is unsealed so that notification templates can be customized however the client sees fit, though basic information is included within each template for a quick start.


Microsoft Deployment Toolkit (MDT) 2012

Using MDT 2012 for imaging is a great solution on your System Center Configuration Manager (ConfigMan) 2012 journey.  How can this free tool help you implement ConfigMan 2012?  Read on and let's get to work.

Microsoft Deployment Toolkit

MDT 2012 has a nice feature that allows you to create an image deployment share and then port the artifacts into ConfigMan 2012, where Operating System Deployment (OSD) will use the items you created in MDT to deliver images.  This saves you from wasting time and effort learning a new tool and deploying images, only to redo all of that work when OSD is right around the corner.

MDT 2012 Installation

Download here

You can also find print-ready documentation, a What’s New document, and the release notes.

Here is a link http://technet.microsoft.com/en-us/solutionaccelerators/dd407791.aspx

Installing MDT 2012

Installing MDT 2012 is not difficult; the trouble starts when you begin to use the tool and it tells you that you must install some prerequisite applications.  MDT gives you an interface to aid (somewhat) in installing these helper tools and applications.  I think MDT is a great product for imaging and deploying images – this installation applet, however, is a bit difficult to work with.  The applications and tools that MDT needs to function are:

  • Microsoft XML 6.0
  • Windows Automated Installation Kit (x86 and x64)
  • Microsoft Assessment and Planning (MAP) Toolkit 6.5
  • USMT 3.0.1 (x86 and x64)
  • Microsoft Application Compatibility Toolkit 5.6
  • Office Environment Assessment Tool 2010 U1
  • Office Migration Planning Manager
  • Microsoft Security Compliance Manager

Here is what I have found makes the process a bit easier.  Go to Information Center – Components.  In the top pane you will see the tools available for download; when you highlight a tool, a download button appears in the lower pane.  Clicking the download button starts the download, and once it completes, the tool appears in the middle (Downloaded) pane.

Highlight the tool in the center pane and the lower pane will show either an Install button or a Browse button.  Clicking Browse takes you to the download folder, where you can double-click the installer or work with the tool from there; the Install button installs it on the MDT server.  You will notice that an application may appear in all three panes (available for download, downloaded, and installed), and sometimes items won’t move down the panes.  This is okay – you will know all is well if MDT lets you create a deployment share.  Every so often, go to Information Center – Components and use Check for Updates in the far right pane (the console will also alert you when you haven’t checked for updates lately).  This goes out to the internet and pulls down any updates to the tools or to MDT itself from Microsoft.


Designing Orchestrator Runbooks for Service Manager- Video and Slide Deck


Why are Orchestrator runbooks for Service Manager important to you and your business?  Have you been left wondering the best way to present your Orchestrator-automated processes to end users for self-service?  Do you have advanced request processes you are trying to manage via Service Manager?  Do you need the ability to automatically enable and disable steps based on user input?  Have you attempted to move data around in Service Manager, only to discover it's a puzzle to get the data where you’d like it to go?  Are you curious about the deeper level of process management you can add to Service Manager by leveraging Orchestrator?

View Our Webinar

Click below and learn how to plan and integrate these two systems in order to better serve your organization’s needs.

Using a process defined in a map as a vehicle, we’ll discuss the considerations and how-to’s to fully leverage Orchestrator to automate and manage your Service Manager environment.


Don’t Be The Next Target | Protect your Active Directory

The recent Target compromise was a wake-up call for many firms, not just the Target stores across the U.S.  Let’s face it – IT security is often not at the top of the budgeting priority list every year.  Your organization needs to be proactive in applying smart, effective security practices, and one area of particular interest should be your Active Directory environment.

Target Takes A Hit

Understandably, the biggest headlines about the Target compromise focused on the theft of 40 million credit card numbers and other personally identifiable information.  One element overlooked in most media reports was the extent to which Target’s internal Active Directory environment may have factored into the attack.  Brian Krebs included an interesting tidbit in his reporting: like many firms, Target’s backend systems included extensive integration with Active Directory.

Active Directory Popularity

Active Directory has become ubiquitous, with Gartner estimating that between 90 and 95 percent of organizations have at least one AD forest deployed in their environment.  If an attacker managed to compromise even a single Active Directory account – and particularly one used by IT – the exposure to an organization would be huge.  At a minimum, the sheer amount of information in the directory would be useful for social engineering.

Oakwood Recommendations

Oakwood is advising our clients to be ever vigilant in securing their environments.  Microsoft offers a comprehensive list of AD security best practices.  There are a number of steps you can take right now to enhance the security of your Active Directory environment:

Stop Passing the Hash for Administrator Accounts

Active Directory’s Kerberos security model enables incredible functionality, including single sign-on and access to resources such as file shares.  However, if not properly secured, you might as well just hold the door open for intruders.  With enough access to even a single server or workstation, an intruder could pilfer the password hashes of any account that might have logged into the system – including Domain Administrator credentials.  Leveraging a pass-the-hash attack, the attacker could obtain full Domain Administrator access across a domain – with or without the actual password.  Recent enhancements in Windows 8.1 and Server 2012 R2 such as Kerberos Armoring and Protected Users could mitigate pass-the-hash and other sophisticated techniques used by intruders.

Control Applications in your Environment

Users may knowingly or unknowingly be exposing the organization to risk by executing untrusted applications.  In addition to deploying proactive anti-malware systems, two technologies that I highly recommend are AppLocker and Microsoft’s Enhanced Mitigation Experience Toolkit (EMET), which, in combination with Group Policy and System Center, can be used to protect against legal risk and prevent malware and zero-day threats from compromising your environment.

Design for Security, not Insecurity

Policy, architecture, and governance are critical elements in your security strategy.  Organizations need to think carefully about the benefits and costs of their Active Directory design.  Consolidation of non-revenue systems can significantly decrease the cost and complexity of IT operations, but revenue systems should be properly segmented to air-gap critical data.

Be sure to also review your OU and GPO design.  Effective use of GPOs can significantly improve user experience and security – but overly complex GPO designs may create unexpected and unintended consequences.

Take Control of Administrative Rights

Evaluate your organization’s posture regarding administrative rights.  Administrative rights should be properly scoped and delegated based upon role, not mere convenience.  Ensure that administrators limit the use of administrative rights to a separate account on a secured administrative workstation.

To the surprise and delight of many IT auditors, managing local administrator rights on servers and workstations alike can be centrally managed through the clever application of security groups, Group Policy Preferences, and your organization’s Identity Management or Access Management system.

Target Stores Warning Signs

Apply effective auditing, logging, and intrusion detection to your systems.  Defense in depth is critical.  Simply logging changes may not be enough to raise the alarm if malware sanitizes logs or if your operations staff fails to heed warning signs.  Beyond simply logging activity in your environment, implement proactive intrusion detection AND prevention measures.  Unless your IT or security operations team catches on quickly (which sadly did not happen at Target), an attacker will have already executed their payload, deployed malware, and gained free rein over your customers’ data.

Next Steps

Most importantly, don’t wait and be the next Target.  Keep your organization and its customers safe by securing your Active Directory environment today.



Survive the Breakup With Your Old Infrastructure (known as Windows Server 2003)

Breakups can be hard. The longer you've been with someone, the harder it is to split. You're used to one another. You've integrated them into almost every aspect of your daily life, sometimes to the point where you can't even remember what life was like before them. It's a scary thing to face, not knowing if you'll find someone else – or whether, if you do, that person will be good enough. It just won't be the same. Don't you wish there were a quick-and-easy guide on how to get over someone? Perhaps one that would even guarantee you'll find someone better?

Our Special Guide

That guide probably doesn't exist in real life (at least not to our knowledge), but it certainly does exist for breakups with old infrastructure, particularly Windows Server 2003. Not only is there a guide, but there are actual counselors who specialize in helping you move on to new and better things. If you're dealing with Windows Server 2003 end of support, not only do you need to break up with your old infrastructure NOW, but you also need to follow a strategic process to ensure you can fully move on without any major problems.

Download the Survivor's Guide to Win2003 End of Life

Windows Server 2003 | Strategize

The end is near for Windows Server 2003: you must break up with your old infrastructure by July 14, 2015. Eleven solid years of running mission-critical applications are about to go down the drain. Have no fear – you have plenty of options. But first, you need to take an inventory of what you have running on 2003 servers. This is a critical step in the moving-on process. Take some time to reflect on what you have and what you need, then develop a strategy for migration. Without discovering your best, most important assets (or workloads), it will be impossible to find the right migration solution for your business.

Identify and Understand

As stated above, the first step in the migration process involves identifying which applications and workloads are most important to your business. These are the workloads you can't survive without. You might be so used to them that you don't even know which ones are running on Windows 2003 servers. If you don't take the time to carefully identify and understand which apps and workloads are mission critical and where they are running, this breakup could be a disaster. After you've cataloged all the configuration items in scope, you'll begin the process of attributing them. Understanding elements such as criticality, business owner, vendor, version, and purpose is vital to aid the next phase of surviving the breakup.


Now you can begin the process of planning the end state of your workloads. Decide who you are and what you want, but keep an open mind. If you've been running Windows Server 2003, perhaps you think the most logical end state would be Windows Server 2008/2012. Not necessarily; every person and organization is different. Everyone has different needs, and there isn't a one-size-fits-all solution. Consider your options before you take the leap – there's private cloud, IaaS, PaaS, and SaaS. Some workloads and applications will logically lead to certain targets, while others could be migrated to one or more of these destinations. Don't be driven by emotion during this difficult breakup; consider factors such as speed, ease of migration, cost, and desired functionality when making decisions.


Hopefully, you've gotten to this point with little resistance. If you have experienced some setbacks, that's completely normal and should be expected. It's hard to move on to new things, but the old way just won't work anymore. You need this change; you need to transform. Think of it as an opportunity to innovate your entire data center – a fresh start and a new outlook.

Let's Start Anew

Regardless of how well you've held up to this point, you can't do it alone. Maybe it's so bad you need access to those specially trained counselors (you can reach them here). We've got you covered: should you need guidance during this breakup, attend our complimentary webinar or download the Survivor's Guide to Windows 2003 End of Life. Migrating away from Windows Server 2003 is an investment in your future, and there has never been a better time to begin the process of moving on. Take the next step to transform your data center.


Windows Server 2003: Know Your Migration Options (and the risks associated)

On July 14, 2015, support for Windows Server 2003 will come to an end, and organizations that have not taken action will be vulnerable to security breaches and, in certain industries, will risk their state of compliance. Companies must analyze the impact of such an event on their business, evaluate their options, and make a decision based on risks and costs. There are several approaches for mitigating those risks, ranging from taking no action to migrating all existing systems.

Windows Server 2003 End of Support Options

Below is a list of six different approaches to addressing the impending end of support for Windows Server 2003.

Each of these approaches has its benefits and challenges:

  1. Taking no action and/or isolation
  2. Paying for a custom support agreement
  3. Retiring certain assets using or running on the old OS
  4. Manual migration to a newer version of Windows Server
  5. Automated migration to a new version of Windows Server
  6. Migration to cloud-based infrastructure running a newer version of Windows Server

Taking No Action

Deciding to take 'no action' in response to the end-of-life event is, in itself, a decision. If an organization chooses this path, it must acknowledge the potential consequences, including the discovery of a new security vulnerability or a system failure. Not having access to support would be a major disadvantage if either threat materializes. Regardless, a careful analysis should be done to assess the risks and costs associated with this approach. For some organizations, the risks of staying put might not outweigh the costs of migrating, and taking no action may be an appropriate response.

Isolation entails placing older systems in a portion of the network that is segmented or even disconnected from the larger company network and the internet. This approach addresses the problem of new security vulnerabilities but does not solve the problems resulting from lack of access to Microsoft support.

If an organization has Windows Server 2003 instances that cannot be migrated, or cannot be migrated in time, placing them in Hyper-V virtualized containers might offer some mitigation. Virtualized containers isolate the Windows Server 2003 instances, potentially boosting security, and may also make eventual migration of whatever workload is running on the server easier (though this does not satisfy compliance standards). Using containers is a tactic for short-term mitigation, not a long-term strategy.

Paid Support

A custom support agreement is a paid offering that may be made available to customers, subject to an approval process and a documented migration plan. The price increases each year, and the program is only available for a limited time after extended support ends. Custom support agreements carry a major cost, one that weighs more heavily on smaller organizations. When assessing whether this is the appropriate option, it's important to keep in mind:

  • Custom support is only offered to customers with active Premier support agreements. Companies that don't already have a Premier support agreement will incur a significant cost in establishing one.
  • Microsoft will only sign a custom support agreement with customers who have a fully documented migration plan in place.
  • With the combination of a Premier support agreement and a custom support agreement, customers will only have access to critical security patches. However, security patches rated as important and bug fixes are available for additional fees, which can escalate every year.

Custom support is not designed to be a permanent solution. It is best to go this route if an organization needs immediate support for mission-critical systems.


Retiring Assets

Another option that does not involve migration is to fully retire certain applications. Many applications running on Windows Server 2003 can simply be retired. Enterprises should assess their application portfolios to determine which applications have reached the end of their useful lives and can be retired without major business impact. Assessing the applications running on Windows Server 2003 with this same intent can eliminate the need to migrate them at all. This approach can be used in combination with the other options.

Manual Migration

For some applications, manual migration to a server running a newer version of Windows Server may be an option. However, many applications require more than a simple reinstallation for a successful migration. Revisiting the development process may be necessary to understand which applications depend on OS functionality specific to Windows Server 2003. Developer modifications might only be an option for internally developed applications; third-party application developers might be unwilling to reopen development for an application, or may have ceased operation or been acquired over the years.

Keep in mind that even a partial manual migration effort improves the risk and cost factors of the previous approaches: no action, isolation, and custom support.

Automated Migration

Automated migration involves using specialized tools that enable encapsulation of applications on Windows Server 2003 and migrating the applications to newer versions of Windows Server. Tool-based migration can reduce the aggregate cost of manual migration because fewer servers will require manual migration. Depending on the number of servers, the cost may be justified.

Manual and Automated Migration Targeting a Cloud Host

An organization that decides on a manual migration, automated migration or both has several options for applications:

  • Internal virtual or physical machines
  • Hosted virtual or physical machines
  • Cloud-based virtual machines

The availability of cloud-hosted virtual machines provides an additional cost-optimization factor. Eliminating physical servers in these scenarios will result in reduced capital expenditures. The decision criteria vary greatly, depending on organizations' needs and size.

For organizations actively looking to transform their existing environment to a cloud-based model, the migration of Windows Server 2003-based applications represents an opportunity to move a significant block of legacy functionality to the cloud.
