Channel: Steve Rachui's Manageability blog – ConfigMgr/OpsMgr

SCCM2007 - Peer DP functionality


I've been spending a fair amount of time of late working with SCCM2007 - which is currently available for download and evaluation.  As with any beta software, SCCM2007 is still subject to change - for prerequisites and download instructions see http://www.microsoft.com/smserver/evaluation/2003/smsv4.mspx

There are several new features in SCCM2007 and we will spend some time talking about them over the next few entries.

One of the new features is the addition of peer distribution point (PDP) functionality.  In short, PDP's are workstation based distribution points and are targeted to fit branch office scenarios where it would be helpful to have software content available locally, so as to avoid traversing small or busy WAN links, but it isn't practical to have a server class machine at the location.

Assigning software content to a PDP can be done just as you would for a standard DP - just select the PDP from the list of available distribution points when configuring the package.  In addition, it is also possible to have a PDP provisioned 'on demand'.  This means that if a client receives an advertisement to run software that should be run from the PDP, and that PDP does not have the content provisioned already, the PDP can be configured in the background by SMS to receive the software and make it available to requesting clients.

We will start to look at this in greater detail over the next few weeks.

-Steve


SCCM 2007 - Peer DP functionality - admin provisioning


The last entry began the discussion of peer DP's with an overview description of this new feature.  I thought for this entry we would spend a few lines discussing how a peer DP actually gets provisioned with content.

As mentioned in the last entry, it is possible for an administrator to proactively configure a peer DP to host content.  This is done just as it is for a standard DP - just add it to the package distribution point list.  But how does the peer DP actually get the content?

When a standard DP is provisioned the content is copied at that time to the distribution point and made available to clients. Provisioning a peer DP is a bit different.  In order to add content to a peer DP there must be at least one standard BITS enabled DP that either already has the content or is being provisioned with the content at the same time as the peer DP.  If this is not the case an error will be displayed to the administrator - the peer DP will still be added but no content will be provisioned until at least one standard BITS enabled DP is available with the content. 

In addition, content is not copied to the peer DP but, instead, policy is prepared and targeted to the peer DP letting it know there is content that needs to be downloaded.  When the peer DP checks for policy it will download this setting and store it locally.  Then, the peer DP component of the client will evaluate this policy and begin the download of the content at that time.  Once the content is downloaded the peer DP is available to serve the content to requesting clients.

Remember, a peer DP is a function that can be performed by any SCCM 2007 client - every client has the peer DP agent role.  Whether this role is active is determined by whether the client is configured as a peer DP site system.  Remember also that a peer DP does require interaction with BITS enabled DP's in order to download content.

We will discuss how 'on demand' provisioning works in the next entry.

-Steve

SCCM 2007 - peer DP functionality - on demand provisioning


The last two entries provided an overview of the peer DP role in SCCM 2007 and also discussed admin provisioning of a peer DP.  For this entry we will discuss 'on demand' peer DP provisioning.  It should be noted again that SCCM 2007 is still a beta product so these details remain subject to change.

'On demand' provisioning happens when a client is targeted with a deployment and none of the accessible DP's available to the client have the content available.  In such a situation, if there is a peer DP available to the client, that peer DP will be automatically added as a distribution point for the package.  In this way, the next time the client attempts to execute the deployment it should be available on the accessible peer DP.

Let's go a bit deeper into the process.  There are a few conditions that must be true for 'on demand' provisioning to occur.

1.  There are no standard DP's the client can access that have the content available
         Note, this does NOT mean that no standard DP's have the content - it is required
         that at least one standard BITS enabled DP is available with the content at the 
         site where the peer DP is located.  If this is not the case, the on demand request 
         will be delayed until the condition is met.

2.  There is at least one peer DP that is protected and whose boundaries include the client's boundary.

3.  The advertisement property controlling client fallback to non-protected distribution points
is enabled.  This property is described as follows in the advertisement GUI:  

         "If the client is in the boundaries of one or more protected distribution points, 
         allow the client to access the content only from the protected distribution points"

4.  The package must be configured to allow 'on demand' provisioning.

With all of this enabled the flow of events would be as follows....

-A package and advertisement are created.  The package is staged on at least one BITS enabled standard DP - protected or not.

-The advertisement is received by a targeted client.  The client attempts to find the content so it can run the deployment and makes a location request to the management point.

-The management point queries the database to determine the list of distribution points that have the content and that the client can access.  Because the fallback flag is enabled on the advertisement, no distribution points are returned, but the management point does determine that a peer DP is available in the protected boundaries of the client and just doesn't have the content.  Based on this, the management point triggers addition of the peer DP to the distribution point list for the package.

-The client will receive a blank list of distribution points from this initial content request and the first attempt to run the deployment will fail.

-Behind the scenes the DP is added and policy is adjusted so the peer DP knows it has a package to download.  On the next policy evaluation cycle the peer DP learns of this policy and the software gets downloaded to the peer DP.

-The client tries again and sends another content location request to the management point.  This time the peer DP is returned to the client as an available source of the package and package execution continues.

Branch DP internals - SCCM 2007


Many customers have begun using the new Branch Distribution Point feature in SCCM 2007 - and it is fairly straightforward to set up.  But how does it actually work?  What changes have been made to accommodate the branch DP?  Some required components are new and some are modified.  We will cover a few of these in the next couple of entries - we will start with Distribution Manager.

Distribution Manager - For standard DP's distribution manager is responsible for directly connecting to the distribution point, creating the DP shares if required and performing a file by file copy of package content to the destination directory.  If the target distribution point is at a child site, distribution manager is responsible for compressing the source content and creating the appropriate instructions to send the compressed content to the destination site where it will be copied to local DP's by the local distribution manager.

The branch DP is different because no content is copied to it directly.  Rather, the branch DP pulls the content from a standard BITS enabled DP that is storing the content.  This pull is initiated when the branch DP receives notification (via policy) that content is targeted to it.  Distribution manager's role in all of this is simply to trigger creation of the policy entries in the database that the branch DP will use to know what content it should provision.  A sample log entry showing this change in distribution manager functionality is below.

Distribution Manager Log Snips

Standard operation
Start adding package to server ["Display=\\STEVERACSMSBDD\"]MSWNET:["SMS_SITE=2K3"]\\STEVERACSMSBDD\... 
Will wait for 1 threads to end. 
Thread Handle = 3452 
Attempting to add or update a package on a distribution point. 
<Status message log entry omitted>
Established connection to ["Display=\\STEVERACSMSBDD\"]MSWNET:["SMS_SITE=2K3"]\\STEVERACSMSBDD\ 
The distribution point ["Display=\\STEVERACSMSBDD\"]MSWNET:["SMS_SITE=2K3"]\\STEVERACSMSBDD\ doesn't point to an existing path. 

Branch Distribution Point Operation
Start adding package to server ["Display=\\Testpeerdp1\"]MSWNET:["SMS_SITE=TOP"]\\Testpeerdp\...
DPID 3 - NAL Path ["Display=\\Testpeerdp\"]MSWNET:["SMS_SITE=TOP"]\\Testpeerdp\ is a PeerDP
Processing for Peer DP   <-----We know this is a peer DP and will handle the package differently
<Status message log entry omitted>
Successfully updated PeerDPPkgMap for DPID 3 and PkgID TOP00004.
Successfully inserted MachineID for DPID 3 into PeerDPResMapChg_Notif table.
Created policy provider trigger for ID TOP00004 <-----Here, distribution manager hands off the trigger file it received to the policy provider component for continued processing.

Distribution manager is triggered to begin operation by notification files.  These notification files are created by the SMS SQL Monitor component based on database triggers.  These files are zero length with filenames in the format of <package ID>.pkn.  These trigger files are the same regardless of standard DP operations or Branch DP operations.  For standard DP's, distribution manager processes the <package ID>.pkn and discards it.  In the case of Branch DP's distribution manager will process the file and then forward it to policypv.box which triggers policy provider to setup the required database policy to notify the Branch DP of the package that requires download.

Policy Provider Log Snip

Branch DP Processing
Detected changes in package TOP00004 <---Picking up the <packageid>.pkn file
Looking for CIN files    <---Looking for any Configuration Item Notification files
Looking for software policy and policy assignments that should be removed...
Did not find any software policy or policy assignments that should be removed.
Looking for Peer DP package policy and policy assignments that should be removed...
Did not find any Peer DP package policy or policy assignments that should be removed. 
Looking for software policy and policy assignments that should be created...
Did not find any software policy or policy assignments that should be created.
Looking for Peer DP package policy and policy assignments that should be created...
<Status message omitted>
Successfully created policy {d85c2449-87b8-4bd0-ae5e-de7d3e3273bc} and policy assignment {7aee1882-3a3d-44f9-89c5-88e81943bc87} based on package TOP00004 <---Created required database policy for client

More information
For more information on the setup and operation of the Branch Distribution Point, see my article in the August 2007 edition of technet magazine - also available at the following link:

http://www.microsoft.com/technet/technetmag/issues/2007/08/BranchDP/default.aspx

I have also discussed branch DP operation modes and setup in a few previous blog entries.

Branch DP Internals - client side - SCCM 2007


The last post detailed required server side processing to prepare a package for distribution to the Branch DP.  Let's now take a look at what happens on the Branch DP to finally get the content copied to and ready for access by requesting clients.

The last step in server side Branch DP package processing is the creation of policy to notify the Branch DP there is work to do.  This is where we start the processing for this entry.  As we go through this discussion, bear in mind that the Branch DP is simply a component of the SCCM advanced client - so instructions to the Branch DP component come down to the system just like any setting would - through a policy retrieval cycle.

Before receiving policy for Branch DP processing it is assumed that the actual Branch DP functionality has been enabled (Branch DP's must be configured as site systems in order for them to be visible in the package DP list - in this post we assume the branch DP has already been enabled).  It is easy to verify the Branch DP is enabled and ready to receive a package.  Simply check the root\ccm\policy\machine\actualconfig WMI namespace.  Look at the instance list for the CCM_DistributionPoint class.  If the system has been enabled as a Branch DP you will see an entry in the instance list called CCM_DistributionPoint.DummyKey=1.
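For those who prefer the command line, the same check can be made with a quick WMI query.  The following is just a sketch - it assumes the namespace and class names described above and must be run locally on a machine with the SCCM client installed:

```powershell
# Run locally on the suspected Branch DP (requires the SCCM client).
# If the machine is enabled as a Branch DP this returns the instance
# with DummyKey = 1; an empty result means the role is not active.
Get-WmiObject -Namespace "root\ccm\policy\machine\actualconfig" `
    -Class CCM_DistributionPoint
```

If nothing comes back, confirm the system has actually been configured as a Branch DP site system before troubleshooting further.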

For this post, we will assume that the Branch DP has been enabled previously.  When the SCCM client hosting the Branch DP performs its normal polling cycle (default every hour) it will pick up the policy change prepared by the server to notify the Branch DP that it has a package pending download.  Once the policy has been downloaded to the client, the Branch DP agent policy cycle on the client will read the new policy in WMI and begin the download of the content from any available standard BITS enabled distribution point (meaning a DP that can be accessed and is not restricted due to protected boundaries).  This is a BITS download and will honor any BITS settings configured in the BITS section of the computer client agent.  When the download begins, a temp folder is created on the client to store the content (this folder is not accessible to a user of the system).  If problems take place during download this folder will persist.  Assuming all goes well, once the download is complete a standard distribution folder will be visible on the client.

Branch DP processing is recorded in the PeerDPAgent and associated logs as follows

PeerDPAgent.log
CPDPJobManager::OnPDPStatusTask
PDP Maintenance Message Body : <?xml version='1.0' ?><PDPScheduledMaintenance MessageType='PDPScheduledMaintenanceTask'> <PDPScheduledMaintenanceAction ActionType='Predefined'> <PDPScheduledMaintenanceActionID>{00000000-0000-0000-0000-000000000109}</PDPScheduledMaintenanceActionID> <Description>PDP Scheduled Maintenance</Description> </PDPScheduledMaintenanceAction></PDPScheduledMaintenance >    <---The GUID is the Branch DP schedule token
CPDPJobManager::OnMaintainContent
Raising event:
[SMS_CodePage(437), SMS_LocaleID(1033)]
instance of PDPStartMaintenanceTaskAll
{
     ClientID = "GUID:EF0FBF55-0F42-4271-B742-BED81D895FA4";
     DateTime = "20060725194148.144000+000";
     MachineName = "STEVERACPEERDP1";
     ProcessID = 2164;
     SiteCode = "TOP";
     ThreadID = 3424;
};
Successfully submitted event to the Status Agent.
Raising event:
[SMS_CodePage(437), SMS_LocaleID(1033)]
instance of PDPMaintenanceTaskList
{
     ClientID = "GUID:EF0FBF55-0F42-4271-B742-BED81D895FA4";
     DateTime = "20060725194148.207000+000";
     MachineName = "STEVERACPEERDP1";
     PackageList = "";
     ProcessID = 2164;
     SiteCode = "TOP";
ThreadID = 3424;
};
Successfully submitted event to the Status Agent.
Client is not in native mode, internet facing is not supported.
CPDPJobManager::OnPolicyArrived
PDP_CreateJobData
Created Peer DP job {12AC27D9-5D92-4677-8F15-05372686E060} for package TOP00004
CPDPJob::EvaluateState    <---Job is created and processing begins
CPDPJob::PreprocessJob
Drive 'A:\' is not a fixed drive, ignoring.    <---Enumerate drives to see what is available
Drive 'D:\' is not a fixed drive, ignoring.
CPDPJob::CheckForPreStagedPkg
Checking C:\SMSPKGC$\TOP00004 for prestaged TOP00004 package    <---Check to see if the package has been manually copied to the Branch DP.  If so we will use it.
Package TOP00004 has not been prestaged
Package TOP00004 in state 'Starting'.
CPDPJob::EvaluateState
CPDPJob::StartNewJob
CPDPJob::EvaluateDoNotDownloadFlag
Drive 'A:\' is not a fixed drive, ignoring.
CPDPJob::PrepareStagingDir
Temp dowload Path: 'C:\PDPD4DF.tmp'    <---Create our temporary download location for the package
Disconnected 0 users from directory C:\PDPD4DF.tmp
Temp staging directory for package TOP00004 is C:\PDPD4DF.tmp
CPDPJob::InvokeDownload
Calling DownloadContent, the type is 0
Raising event:
[SMS_CodePage(437), SMS_LocaleID(1033)]
instance of PDPDownloadStartedEvent
{
     ClientID = "GUID:EF0FBF55-0F42-4271-B742-BED81D895FA4";
     DateTime = "20060725194218.629000+000";
     MachineName = "STEVERACPEERDP1";
     PackageID = "TOP00004";
     ProcessID = 2164;
     SiteCode = "TOP";
     SourceVersion = 2;
     ThreadID = 1512;
};
Successfully submitted event to the Status Agent.
Package TOP00004 in state 'Downloading'.    <---Downloading the package - download is actually handled by the ContentTransferManager component
CPDPJob::EvaluateState
CPDPJob::ProcessProgress
Download complete for CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, downloaded KB 67    <---Flags the CTM job ID and complete status.  The GUID is the ID that will show up in the ContentTransferManager log and can be used to track progress with that component.
CPDPJob::DownloadCompleted
Package TOP00004 in state 'DownloadComplete'.    <---Download complete - package ready to use

ContentTransferManager.log
Starting CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}    <---CTM job GUID - the link between the current and PeerDPAgent logs

CCTMJob::EvaluateState(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, State=Starting)
Attempting to persist location request for PackageID='TOP00004' and PackageVersion='2'
LSCreateRequestInWMI
Attempting to create Location Request for PackageID='TOP00004' and Version='2'    <---LocationServices request being created
Succesfully created Location Request
Persisted location request
In LSNapInitializeLocationFilter for request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}.    <---LocationServices request ID
In CCCMNAPLocationHandler::IsSystemInQuarantineState.
System is not in quarantine state.
Attempting to send Location Request for PackageID='TOP00004'
LSCreateRequestMessageBody
Client is not in native mode, internet facing is not supported.
ContentLocationRequest : <ContentLocationRequest SchemaVersion="1.00"><Package ID="TOP00004" Version="2"/><AssignedSite SiteCode="TOP"/><ClientLocationInfo LocationType="SMSPackage" UseProtected="0" AllowCaching="0" BranchDPFlags="1" UseInternetDP="0"><ADSite Name="Default-First-Site-Name"/><IPAddresses><IPAddress SubnetAddress="0.0.0.0" Address="<obscured on purpose>"/><IPAddress SubnetAddress="<obscured on purpose>" Address="<obscured on purpse>"/><IPAddress SubnetAddress="0.0.0.0" Address="0.0.0.0"/><IPAddress SubnetAddress="2002:4135:4153:0000" Address="2002:4135:4153:0000:0000:0000:4135:4153"/></IPAddresses></ClientLocationInfo></ContentLocationRequest>
Created and Sent Location Request '{B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}' for package TOP00004    <---LocationServices request ID - the link between the current and LocationServices logs
CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF} entered phase CCM_DOWNLOADSTATUS_DOWNLOADING_DATA
Queued location request '{B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}' for CTM job '{5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}'.    <---Log entry showing the link between the CTM job ID and the LocationServices request ID.  We hand off to LocationServices here.
CCTMJob::EvaluateState(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, State=RequestedLocations)
Created CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF} for user S-1-5-18
In CLSCallback::LocationUpdateEx
CTM dumping locations returned by Location Service:    <---We have received data back from LocationServices
Source: 'http://SMSServer/SMS_DP_SMSPKGC$/TOP00004/' Locality: Remote Version: 5430 Capability: <Capabilities SchemaVersion="1.0"/>
Source: '\\SMSServer\SMSPKGC$\TOP00004\' Locality: Remote Version: 5430 Capability: <Capabilities SchemaVersion="1.0"/>
CCTMJob::UpdateLocations({5AF7E2FA-77F4-42FD-9E9F-31C885477BBF})
CTM_NotifyLocationUpdate
CCTMJob::_PersistLocations
CCTMJob::_DeleteLocations
Persisted location 'http://SMSServer/SMS_DP_SMSPKGC$/TOP00004', Order 0, for CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}
Persisted location 'file:\\SMSServer\SMSPKGC$\TOP00004', Order 1, for CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}
Persisted locations for CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}:
(REMOTE) http://SMSServer/SMS_DP_SMSPKGC$/TOP00004
(REMOTE) file:\\SMSServer\SMSPKGC$\TOP00004
CCTMJob::_GetNextLocation
In CDataTransferService::CDataTransferService
CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF} (corresponding DTS job {19AAE742-DFC3-48DF-AE18-A07D5AD7EB8C}) started download from 'http://SMSServer/SMS_DP_SMSPKGC$/TOP00004'
In CDataTransferService::~CDataTransferService
CCTMJob::EvaluateState(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, State=DownloadingData)
CTM job {5AF7E2FA-77F4-42FD-9E9F-31C885477BBF} successfully completed.
CCTMJob::EvaluateState(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, State=Success)
CCTMJob::EvaluateState(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}, State=Complete)
CCTMJob::_Cleanup(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF})
CCTMJob::_Cleanup(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}) - Cancelling LS job {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}
CCTMJob::_Cleanup(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}) - Cancelling DTS job {19AAE742-DFC3-48DF-AE18-A07D5AD7EB8C}
In CDataTransferService::CDataTransferService
spDTS->CancelJob(id), HRESULT=80040215 (e:\buildall\nts\sms\framework\ccmctm\util.cpp,742)
In CDataTransferService::~CDataTransferService
CCTMJob::_Cleanup(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}) - Deleting persisted locations
CCTMJob::_DeleteLocations
CCTMJob::_Cleanup(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF}) - Deleting persisted job
CRemoveJobFromGlobalState::Execute
CCTMJob::~CCTMJob(JobID={5AF7E2FA-77F4-42FD-9E9F-31C885477BBF})

LocationServices.log
Created filter for LS request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}.    <---LocationServices gets the request from ContentTransferManager
LSGetSiteCodeFromWMI
LSGetSiteCodeFromWMI : Site code returned from WMI is <TOP>
LSGetADSiteName
Current AD site of machine is Default-First-Site-Name
IPv6 entry points already initialized.
DHCP entry points already initialized.
Adapter {C53880AD-BE51-4DD1-9B8E-6A1DF78CD296} is DHCP enabled. Checking quarantine status.
Adapter {7FD00A16-5286-4C84-BBFF-D2D7307C5EEF} is DHCP enabled. Checking quarantine status.
dwRetVal, HRESULT=80070002 (e:\buildall\nts\sms\framework\ccmutillib\ccmiputil.cpp,379)
Client is not in native mode, internet facing is not supported.
sHost.length()!=0, HRESULT=80040215
Client is not in native mode, internet facing is not supported.
LSGetSiteCodeFromWMI
LSGetSiteCodeFromWMI : Site code returned from WMI is <TOP>
LS Verifying message
CLSReplyLocationsTask::Execute
Processing Location reply message
ContentLocationReply : <ContentLocationReply SchemaVersion="1.00"><Sites><Site><MPSite SiteCode="TOP" MasterSiteCode="TOP" SiteLocality="FALLBACK"/><LocationRecords><LocationRecord><SMBPath Name="\\SMSServer\SMSPKGC$\TOP00004\"/><URL Name="http://SMSServer/SMS_DP_SMSPKGC$/TOP00004/"/><ADSite Name="Default-First-Site-Name"/><IPSubnets><IPSubnet Address="172.29.8.0"/><IPSubnet Address=""/></IPSubnets><Metric Value=""/><Version>5430</Version><Capabilities SchemaVersion="1.0"/><ServerRemoteName>SMSServer</ServerRemoteName><DPType>SERVER</DPType></LocationRecord></LocationRecords></Site></Sites></ContentLocationReply>
LSGetContentPoints
Job requires both local and remote locations
HTTP download has been specified
SMB download has been specified
Request is for a PeerDP download
LSGetADSiteName
Current AD site of machine is Default-First-Site-Name
IPv6 entry points already initialized.
DHCP entry points already initialized.
Adapter {C53880AD-BE51-4DD1-9B8E-6A1DF78CD296} is DHCP enabled. Checking quarantine status.
Adapter {7FD00A16-5286-4C84-BBFF-D2D7307C5EEF} is DHCP enabled. Checking quarantine status.
dwRetVal, HRESULT=80070002 (e:\buildall\nts\sms\framework\ccmutillib\ccmiputil.cpp,379)
Invoking LSInvokeCallback with list of content locations
LSInvokeCallback
In CLSCallback::CLSCallback
Calling back with the following distribution points
Distribution Point='http://SMSServer/SMS_DP_SMSPKGC$/TOP00004/', Locality='REMOTE', DPType='SERVER', Version='5430', Capabilities='<Capabilities SchemaVersion="1.0"/>'
Distribution Point='\\SMSServer\SMSPKGC$\TOP00004\', Locality='REMOTE', DPType='SERVER', Version='5430', Capabilities='<Capabilities SchemaVersion="1.0"/>'
In LSNapApplyLocationFilter for request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}.
In CCCMNAPLocationHandler::FilterLocations.
Filtering locations for request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}.
In CCCMNAPLocationHandler::IsSystemInQuarantineState.
Machine is not in quarantine. No need to filter.
In CLSCallback::~CLSCallback
Calling back with locations for location request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}    <---Returning locations to ContentTransferManager
CCCMPkgLocation::CancelLocationRequest
In LSNapDeleteLocationFilter for request {B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}
In CCCMNAPLocationHandler::RemoveFilter.
Attempting to cancel location filter for requestID={B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}
Found matching filter for requestID={B246ADF4-F6E3-41A5-82F4-0F2C6387A35F}

Reviewing WMI to see job details may also be helpful.  All Branch DP jobs are stored in the root\ccm\PeerDPAgent namespace - an example is shown below.
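A quick way to peek at those job details without a WMI browser is to enumerate the namespace from PowerShell.  This is a minimal sketch - it avoids hard-coding any class names (which can vary by build) and simply dumps whatever instances exist, and it requires running locally on the Branch DP:

```powershell
# Sketch: dump all instances in the Branch DP job namespace.
# Class names are enumerated rather than assumed, since they
# may differ between builds.
Get-WmiObject -Namespace "root\ccm\PeerDPAgent" -List |
    Where-Object { $_.Name -notlike "__*" } |
    ForEach-Object {
        Get-WmiObject -Namespace "root\ccm\PeerDPAgent" -Class $_.Name
    }
```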

Branch DP Job - WMI 

It may also be helpful to review the actual policy pending download by the Branch DP.  The screenshot below is from the policyspy tool.

Branch DP Policy - PolicySpy


ConfigMgr 2012, the Application Model and advanced Detection Logic


Two previous blog posts, here and here, detail the inner workings of the new ConfigMgr 2012 application model. When properly used the application model is a capable mechanism to help administrators manage and track applications throughout their lifecycle. As administrators use and are successful with the application model their understanding of its various capabilities increase along with a willingness to investigate how the application model can be better leveraged to solve unique business scenarios. There are several places in the application model that administrators might leverage to uniquely address specific needs. This article focuses on the detection methods available for an application and how they can be used to do more than just detect whether an application is installed.

For this example, consider that Tailspin Toys has an interest in moving to the application model but, due to the need to support legacy business systems, that move is viewed as difficult. The legacy systems in question require the creation of a local ‘flag’ file on systems where deployments are executed to indicate to legacy systems that the deployment was successful. These ‘flag’ files may also contain other information relevant to the deployment.

Because of the stated requirements a decision has been made to continue repackaging and delivering deployments with the legacy package model, which has been useful to support these legacy systems historically.

If the above decision were to be made it would result in the continued use of a legacy approach to package creation and delivery and also likely lead to significant delays in updating dependent legacy systems. Instead, what options exist with the application model to solve this scenario? There is no feature of the application model that will allow deploying a ‘flag’ file as part of application deployment after all! If limiting only to ‘in box’ options as described that statement is correct. With some inventive thinking, though, the application model is fully capable of solving this scenario.

From the title of this article it is obvious that discussion will focus on the detection methods available in the application model. Before getting into details of the example scenario a review of the options available for detection is useful.

Detection methods
Application detection methods are used to determine whether the application is installed. If an application is deployed to a system where it is already present, then a properly configured detection method will find that application and the application reinstall will be avoided. If the application is not present it will be installed and the configured detection method will be used to verify the installation once complete. An install that can be verified by the detection method will be viewed as successful while an install that cannot be verified will be seen as a failure. For required application installs the detection method will be reevaluated every 7 days (by default, and changeable in client settings) to confirm the application remains installed. If the application is still installed no further action is taken. If the required application is no longer installed it is reinstalled.

There are four categories of options that can be used for a detection method – MSI, File/Folder, Registry and Script.

MSI
[Screenshot: MSI detection method]

An MSI detection method leverages the unique product ID of an MSI application to determine if that application has been installed. In addition to the product ID it is possible to also configure a version check. For a manufacturer supplied MSI this method is reliable. For applications that have been repackaged into an MSI take care to make sure the configured detection method is actually configured to detect the installed application rather than the repackaged MSI. Failure to do so can lead to failure or inaccuracy in other areas of the application model.

File/Folder
[Screenshot: File/Folder detection method]

For applications other than MSI’s or if a detection method is preferred that does not rely on the MSI then file system based detection is an option. With this option it is possible to configure detection of specific files and folders that must be present in order to consider the application installed.

Registry
[Screenshot: Registry detection method]

Most modern applications will write to the registry during install. If this is the case for the application being deployed the registry can be a very useful choice for detection method. Detection logic configured here can focus on any registry hive – HKEY_CLASSES_ROOT, HKEY_CURRENT_CONFIG, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE or HKEY_USERS – and configuration can focus on a specific key or a specific value.

Script
[Screenshot: Script detection method]

Scripts – the ultimate in flexibility! If a detection method is needed beyond what can be delivered with the other options, detection scripts provide a great solution. There are three supported scripting languages – VBScript, PowerShell and JScript. The script window isn’t much of an editor so use an external editor to build the script and, when ready, paste it into ConfigMgr.

With script the detection options are almost infinite and it is scripts that are the solution to the scenario described earlier. The scenario details specify that a ‘flag’ file be created to indicate that an application is installed. If the application is not installed the ‘flag’ file should not be present. In addition, the contents of the ‘flag’ file need to contain certain detail. A scripted detection method is the perfect solution to not only handle detecting the application in a more modern and sophisticated way but also can handle creation or deletion of the ‘flag’ file depending on the results of the detection.

Two script examples are shown – the first, more complete one was built using VBScript. The second, bare-bones script was built using PowerShell.

VBScript
clip_image006

PowerShell
clip_image008
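Since the scripts above are only visible as screenshots, here is an illustrative sketch of what a PowerShell detection script for the ‘flag’ file scenario could look like. The file path and expected content are assumptions for illustration only, not taken from the original scripts.

```powershell
# Hypothetical flag-file detection sketch - the path and expected
# content below are illustrative assumptions only.
$flagFile = 'C:\ProgramData\Contoso\AppInstalled.flag'

if (Test-Path $flagFile) {
    # The scenario requires the flag file to contain specific detail;
    # validate it here (the expected string is a placeholder).
    $content = Get-Content $flagFile -ErrorAction SilentlyContinue
    if ($content -match 'Installed') {
        # Any output to STDOUT (with exit code 0) tells ConfigMgr
        # the application is installed.
        Write-Output 'Detected'
    }
}
# No output means ConfigMgr treats the application as not installed.
```

The detection script contract is what makes this work: if the script exits with 0 and writes output, the application is considered installed; if it exits with 0 and writes nothing, it is considered not installed.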

Putting it all together
Bear in mind that there is no requirement to use only one type of detection; mix and match as needed between MSI, file/folder and registry. Note that when script detection is chosen no other options can be added. Whatever the choice, avoid creating detection logic that is unnecessarily time consuming.

Finally, the scenario described is but one example scenario. Many more scenarios exist that may not exactly fit within the application model on first inspection but with additional thought many things are very workable.

Speaking at Live! 360 Orlando


I’ll be speaking at Live! 360 Orlando, November 16-20. Surrounded by your fellow industry professionals, Live! 360 provides you with immediately usable training and education that will keep you relevant in the workforce.

I’ll be presenting the following sessions:

  • Windows 10 Deployment with Config Man OSD – Planning and Strategy
  • Workshop: Application Deployment – The Configuration Manager Way

SPECIAL OFFER: As a speaker, I can extend $600 savings on the 5-day package. Register here: http://bit.ly/LSPK69Home

All roads lead to Live! 360: the ultimate education destination! Bring the issues that keep you up at night and prepare to leave this event with the answers, guidance and training you need. Register now: http://bit.ly/LSPK69_Reg

 

Inventory or Compliance Settings–Making the Right Choice


Inventory, for both hardware and software, is a long-standing capability of ConfigMgr. Both features work well and do what they claim – provide an inventory of either hardware or software on a system based on specific configurations. Inventory is also a very familiar component for anyone who has worked with any version of ConfigMgr. Because it is familiar it is often the automatic option used when doing anything ‘inventory-like’. But is it the best option? That is the question posed by this article.

Up front let me state that using inventory is not a bad option. Depending on specific business needs it may be the only option. But, it shouldn’t be the automatic option just because it is familiar.

ConfigMgr 2007 introduced the Desired Configuration Management feature, now renamed Compliance Settings in ConfigMgr 2012. Compliance Settings allows for quite a number of useful and cool options for enterprise environments and it can be used for some operations that are very inventory-like. With that in mind, let’s explore both options with a bit of history and detail. Keep an open mind as choosing the correct option will make for smoother and more useful operations. One other note, while this article does discuss inventory and compliance settings it is in no way exhaustive of the topics but is intended more to help the reader critically think through options instead of just ‘going with the default’. If you are unfamiliar with the inventory and compliance settings features, it is best to stop and fully understand these capabilities. After doing so this article will be more useful!

Before discussing Compliance Settings, a reflection on inventory. Administrators don’t think much about inventory any longer – it’s automatic. Defaults are often accepted, and that is how inventory operates. Where customizations are made it is often to add additional detail to the inventory rather than to evaluate what actually needs to be collected. Add to this that inventory can be confusing, especially for those new to the discussion. What, exactly, is evaluated by hardware inventory? What is evaluated by software inventory? Well, that’s obvious: hardware inventory evaluates information about hardware and software inventory evaluates information related to software. Those two silos are largely true, but there are glaring places where they are not. As an example, which inventory mechanism provides the detail visible in the Installed Software node?

image

Which inventory mechanism provides the detail visible in the System Console Usage node?

image

An answer of hardware inventory would be correct for both.

But wait – hardware inventory is used to inventory hardware, software and console usage? Yes. Then why is software inventory needed? The answer is that in many cases software inventory may well no longer be necessary. That decision ultimately depends on business requirements; further discussion on that topic is below. Before proceeding, a bit more discussion about hardware and software inventory.

Hardware Inventory
When initially introduced, hardware inventory was all about hardware. Over the product iterations additional categories were added to the mix, to the point that hardware inventory might better be referred to as WMI inventory. Why WMI inventory? The reason is that hardware inventory is completely dependent on WMI as its only source for information. Every grouping of values in any hardware inventory node comes directly or indirectly through WMI. A quick look at the available hardware inventory classes will show just how true this is. A number of default classes are present, and when selecting to add custom classes it is clear that the only option is to add from WMI.

image

OK, OK, I can hear the comments. Steve, you don’t have to inventory just from WMI – it is possible to inventory from the registry as well. After all, the installed applications class seen earlier is pulled directly from the registry. True – but THROUGH WMI. Let me state again: WMI is the sole source of information for hardware inventory. In addition to the ‘obvious’ WMI classes, it is possible to programmatically create various providers in WMI that can be leveraged to pull information from other sources such as the registry. The WMI registry provider is a programmatic method to access registry information through WMI. So, in addition to the GUI customization options there is also the Configuration.mof file, located at %Program Files%\Microsoft Configuration Manager\inboxes\clifiles.src\hinv, which details all default classes that leverage the WMI registry provider for inventory collection. This file is a good resource for knowing which classes pull their data from the WMI registry provider and is an example of how to craft your own classes that reference registry information if needed. Editing and understanding this file is beyond the scope of this discussion, but a quick glance will show how the specific WMI provider, RegProv, is leveraged to access a particular registry key. Further detail indicates which items should be collected during inventory of that registry location.

image
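As an illustrative sketch only (the class name and registry path below are hypothetical, not taken from the real Configuration.mof), a registry-provider-backed class definition follows this general shape:

```mof
// Hypothetical sketch - the class name and registry path are not from
// the actual Configuration.mof; the [DYNPROPS]/registry provider
// pattern is the part to note.
#pragma namespace ("\\\\.\\root\\cimv2")

[DYNPROPS]
class Contoso_ExampleSetting
{
    [key] string KeyName;
    string SettingValue;
};

[DYNPROPS]
instance of Contoso_ExampleSetting
{
    KeyName = "Example";
    [PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Contoso|SettingValue"),
        Dynamic, Provider("RegPropProv")] SettingValue;
};
```

The property context string names the hive, key and value to read, and the provider attribute is what routes the read through WMI’s registry provider rather than a native WMI class.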

So, hardware inventory relies on WMI exclusively. What does software inventory use and how is it different?

Software Inventory
Like hardware inventory, software inventory is done using WMI but the process is different. Specifically, software inventory leverages WMI by calling methods. These methods are responsible for scanning the system, collecting and reporting results. The important thing to know about software inventory is that it can, and often does, return a LOT of data and it may take hours to run. The reason for this is that while software inventory is running the hard disks are being constantly accessed to read data. This has the potential of interfering with user activity on the system so throttling is built in to slow down when a system is under load. The result can be a very long time period, sometimes hours, required to collect software inventory!

Compliance Settings
Compliance Settings is markedly different from inventory but has unique capabilities that make it very flexible and powerful. Like inventory, compliance settings has ‘grown up’ as ConfigMgr has continued to develop. The focus of discussion here centers on the ability to build custom configuration items. Unlike inventory, where a significant number of preconfigured classes are present by default, Compliance Settings configuration items are a blank slate and require administrator effort to build configurations of interest.

image

As a result, unintended clutter is avoided because only the specific items important to a given environment are configured. Compliance Settings evaluations will return a Boolean result for each item evaluated and configurations can be either judged to be compliant or non-compliant individually or in a group of settings. No actual data is collected or stored beyond the result of the evaluation. The result is a very efficient, focused and lean evaluation mechanism with myriad uses.

Inventory ‘Categories’
When considering inventory there are two broad categories of data evaluation – true inventory and detail checking (my descriptive categories, not official Microsoft categories). True inventory is a scenario where it is important that all specific details about a particular object being inventoried are collected and stored. For hardware inventory, as an example, there might be a need to store detail about the number and type of CPU sockets available on a system. For software inventory it might be important to store the file version, creation date, file size and filename.

By contrast detail checking is a scenario where the need is only to evaluate whether a particular configuration is present or absent. In the hardware example it might suffice to know whether a system has a single CPU socket vs. multiples without storing potentially redundant extra information. For the software example it might be sufficient to know that a particular file exists and with a specific version while the additional effort to store the file attributes is redundant.

The Challenge
With the discussion framed, it’s decision time! Specifically, which method of data collection is actually needed to satisfy the business requirements of inventory across the environment? And why do you really care?

In any scenario it is a good idea to only build what is needed to satisfy business needs. For inventory, collecting detail beyond what is needed adds complexity, may increase privacy or regulatory concerns, can impact performance on clients and servers, and may expand the database beyond what is really needed. With a larger database, backup and restore times increase, as do storage costs. Accordingly, and as a general rule, start small and add where needed. That approach is far easier than starting large with plans to eliminate excess data in the future!

Remember, just because data can be collected in a certain way doesn’t mean that way is the best or most efficient! As an example, if business requirements dictate that ‘true inventory’ be performed and all of the associated attributes stored, then that route is easily accomplished using standard inventory techniques. If ‘detail checking’ style ‘inventory’ is sufficient, there are multiple advantages to be gained in terms of flexibility, speed and efficiency. Remember, ‘true inventory’ collects and stores the data requested, whereas ‘detail checking’ simply stores a Boolean value indicating whether a particular configuration is present and no data is collected. Regardless of the method chosen, rich reporting can be brought to bear to make full use of the data.

Deciding which method to use is totally up to the needs of a specific organization. Experience has shown that quite often both methods of data collection are needed and useful. When choosing the path to follow for a given scenario consider a few points.

Considerations
Familiar
Standard inventory has been a part of ConfigMgr for a long time and is familiar to most administrators. It might be easier to just configure something to be collected by inventory and move on, but that approach may not be the most efficient and, over time, may introduce inefficiencies into the overall environment.

Efficiency
Compliance settings will likely be more efficient than inventory. As already stated, Compliance Settings allows for focused vs. more broad based detection scenarios. In scenarios where the data of interest is a known quantity, Compliance Settings may be the better choice compared to traditional inventory.

Flexibility
With traditional inventory, systems across the environment largely run their evaluations according to the same schedules. At each scheduled evaluation all configured items are re-examined and processed. With Compliance Settings the administrator has a choice – either run at the global evaluation interval or according to an independent schedule suitable for the item being evaluated. There is also a difference in the data being returned between inventory and compliance settings. Inventory is engineered to be efficient but due to how it works will return larger data sets than what is returned by compliance settings.

Database Bloat
Disclaimer: Queries are given below to pull certain data from the ConfigMgr SQL database. These queries are safe because they are only ‘read-only’ select queries. The results, however, may inspire administrators to make changes to the database based on perceived inefficiencies. Changes to the database are only supported when working with a Microsoft support resource. The vast majority of changes that may be needed to the database can be managed using the console and that is your safer and supported option. Making changes directly in the database requires a full understanding of the database and, if done incorrectly, can have significant negative impacts on a ConfigMgr environment.

Compliance settings evaluations return a yes/no type of response by sending state messages. Inventory results return a substantial amount of data in most environments and can consume significant database space. As a quick example, look at the number of rows currently in the SoftwareFiles table in your environment by opening SQL Server Management Studio and running the following query against the ConfigMgr database.

image
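The query in the screenshot is difficult to read; a read-only sketch of such a row count query follows. The table name matches the article’s reference but should be verified against your schema, and the query must be run in the context of your site database.

```sql
-- Read-only: count rows in the software inventory files table.
-- Run against your ConfigMgr site database.
SELECT COUNT(*) AS SoftwareFileRows
FROM dbo.SoftwareFiles;
```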

Even small to medium environments will store millions of rows of data for just software inventory! This does not include the size consumed by hardware inventory. In fact, the software inventory tables are generally some of the biggest in the entire database! The point?  If you don’t need software inventory don’t collect it.  If you do need software inventory be sure collection is properly focused to just the data that is required.

It is also interesting to know what the largest tables are in the database. The query below will pull that information.

Note that this query will sort by the tables with the largest row count. Tables with large row count may or may not equate to the biggest tables but it is an interesting view. In addition to row count the query does pull total size of a table so with slight modification it would be possible to sort the list based on true space used by the tables.

image
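The screenshot above isn’t legible either; a common read-only pattern for listing tables by row count along with reserved space (a sketch, not necessarily the exact query used in the post) is:

```sql
-- Read-only: list tables by row count, with total reserved space in KB.
SELECT t.name AS TableName,
       p.rows AS [RowCount],
       SUM(a.total_pages) * 8 AS ReservedKB
FROM sys.tables t
JOIN sys.indexes i      ON t.object_id = i.object_id
JOIN sys.partitions p   ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE i.index_id IN (0, 1)  -- heap or clustered index only, to avoid double counting
GROUP BY t.name, p.rows
ORDER BY p.rows DESC;
```

Changing the ORDER BY to ReservedKB would sort by true space used instead of row count, as the article suggests.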

In a typical environment the software inventory tables and status message tables (which are another discussion entirely) will be near the very top of the list. In my lab, the result is as follows. The top consumer in my admittedly small lab, where software inventory is disabled, is historical hardware inventory data for tracking process information. So, while hardware inventory data may not be at the top of the list it still may be very impactful to the overall size of the database. Hardware inventory data is stored differently than software inventory data: the various hardware inventory classes store their information in <HardwareInventoryClass>_DATA and <HardwareInventoryClass>_HIST tables. To truly see how much database space is being consumed by hardware inventory alone requires adding up the space consumed by all of these tables combined.

image

While SQL is capable of efficiently working with extremely large databases there can be an impact to performance when maintaining such a large database, including slow data performance due to improper indexing (make sure the ConfigMgr indexing task is enabled – it is disabled by default), long backup and restore times and more.

Tuning
To operate efficiently, software needs to be tuned, and inventory is no different – yet tuning in this category is often overlooked. What information is being collected by the inventory process? What hardware classes and associated attributes are enabled for collection? Do you know? Probably not. Is the software inventory process fine-tuned and focused? In most environments, no. The general rule of thumb: if there is no specific need to inventory something, don’t. Properly tuning the collection process improves the efficiency and focus of inventory operations and helps reduce unnecessary bloat of the database. Using compliance settings tends to encourage a more focused approach since configurations are intentional.


OSD – Standalone Media, Version Control and Auto-Updating


Standalone media is a very useful option for building bare metal systems in areas with limited or no network access. Standalone media is also a great failsafe solution to allow builds to continue when problems exist that prevent contact with the ConfigMgr server or related systems.

When using standalone media in scenarios where zero network connectivity exists the concepts in this article won’t apply. If standalone media IS being used in areas of limited or even full but slow network connectivity, read on.

One of the challenges of using standalone media is the possibility that a particular media may become outdated resulting in builds that are immediately out of date. Ensuring media remains up to date often is left to the ConfigMgr administrative team to coordinate. But even with the best efforts such processes are often inadequate. Instead, what if there were a process available to engineer the task sequence itself to make sure it is up to date before proceeding with imaging? It is actually really easy to configure such a scenario and ensure out of date standalone media will not run. A further benefit of such an approach would be to offload the work of maintaining media to the users of the standalone media.

The process demonstrated in this article makes use of some task sequence tweaks and also Orchestrator. Please note that the example provided here is just that – an example. Further refinement will be necessary for use in a production environment. Also, the example shown is designed for a task sequence delivered through standalone media. This example could be extended to other scenarios as well. On to the example.

The example makes use of two network shares, task sequence customizations and a custom Orchestrator runbook.

Network Shares

Two separate shares are used in this example.

The first share will host a text file where the expected current version of the task sequence will be published. Each time the standalone task sequence is updated administrators will need to version up the text file in the share to match the version reflected in the task sequence. The task sequence will be shown shortly.

image

The second share will host the most recently built standalone media iso file. This will be the share location where users of standalone media will be able to obtain the latest iso build for use in updating their media.

clip_image003

Task Sequence

A standard, wizard generated, task sequence was created and then modified to add a Version Checking section, as shown.

clip_image005

Just three easy steps.

Set Task Sequence Version to Variable

This step defines the specific version of this particular task sequence and stores it as a variable in the task sequencing environment. In this case the assigned version is 15. Each time the task sequence is updated and a new version released the version number within the task sequence will be incremented. Not only does this facilitate version control but it also helps get in the habit of treating a task sequence like code and adopting all of the methodical change processes that are used when introducing new code to an environment.

clip_image007

Connect to Network Folder

Next the task sequence maps a network drive to the share hosting the master version information.

clip_image009

Retrieve Current Task Sequence Version

The production task sequence version will be stored in the CurrentTSversion.txt file seen earlier. This step will retrieve the version number from that file and publish it as a variable in the task sequencing environment.

Note that in this example a run command line step is used which calls PowerShell directly and passes the script, all in the command line. Typically, a package and Run PowerShell Script step might be used but for something this simple the Run Command Line step is easier and self-contained.

clip_image011

powershell.exe -executionpolicy bypass -command "& {$tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment; $tsenv.Value('CurrentTSVersion') = Get-Content z:\CurrentTSversion.txt}"

Finally, on the Imaging Section group, logic is added that will compare the TSVersion variable, hard coded at the beginning of the Task Sequence, and the CurrentTSVersion variable, dynamically created after reading the txt file.

clip_image013

In order for imaging to proceed the value contained in both variables must match, indicating that the task sequence being used is, in fact, the current one.

The result? No more out of date imaging!

But there is a second part: a process that detects a version change and automatically generates a new ISO file that technicians use to update their media. That part is handled with a very simple Orchestrator runbook. As mentioned earlier, the runbook is simply an example; more elegant methods could be brought to bear in production.

clip_image014

For the example the Orchestrator runbook will execute every hour using a Monitor Date/Time activity, shown here with the label Execute runbook hourly.

clip_image016

The next action compares the filename of the current standalone ISO file against the version of the task sequence stored in the CurrentTSVersion file. The ISO is expected to have the version number embedded as part of the filename so if there is not a match then something is wrong and the ISO needs to be regenerated. The script hard codes the server and location of the ISO and TXT files being used for comparison. While this works fine it may be more flexible to leverage variables in the Orchestrator environment and pass them here to allow for moving these files, if ever required, without requiring direct changes to the script.

clip_image017

$PSE = Powershell {
$script={
$Match="No"
$CurrentTSVersion=get-content C:\TSVersion\CurrentTSVersion.txt
$LastISOGeneratedFile=get-childitem c:\standalone
$LastISOGeneratedFileName=$LastISOGeneratedFile.BaseName
If ($LastISOGeneratedFileName.Contains("$CurrentTSVersion"))
{
     $Match="Yes"
}
$Match}

invoke-command -computer labsrvcmcas -scriptblock $script
}

This step doesn’t make any decisions about the Match value but simply adds it to the databus by specifying to do so on the Published Data tab. In addition, the script detects the published current version of the task sequence and publishes that to the databus as well.

clip_image018

With the data published to the databus, the runbook connector is leveraged to decide whether to proceed. More robust logic may be useful at this stage to make sure specific requirements are better handled. For the example, the logic simply checks the Match variable to see if the ISO file has the correct version number in its filename.

clip_image019

$PSE = Powershell {
$script={
$file = Get-Item C:\TSVersion\CurrentTSVersion.txt
$TimeDiff = New-TimeSpan -Start $file.LastWriteTime -End (Get-Date)
# Output the age in hours so the value reaches the databus
$TimeDiff.Hours}

invoke-command -computer labsrvcmcas -scriptblock $script
}

If the versions are found not to match, PowerShell is used to trigger the creation of the new ISO which will be stored in the standalone folder.

clip_image020

$PSE = Powershell {
Import-Module ($Env:SMS_ADMIN_UI_PATH.Substring(0, $Env:SMS_ADMIN_UI_PATH.Length - 5) + '\ConfigurationManager.psd1')
CD CAS:
$NewISOFileName = "Standalone" + "\`d.T.~Ed/{ADA9AA9E-F470-4340-A7E9-E9A25AD3DA03}.CurrentVersion\`d.T.~Ed/" + ".iso"
Remove-Item \\labsrvcmcas\standalone\*
New-CMTaskSequenceMedia -StandAloneMediaOption -MediaInputType CDDVD -MediaPath "\\labsrvcmcas\standalone\$NewISOFileName" -ProtectPassword 0 -TaskSequenceId "CAS00065" -TaskSequenceDistributionPointServerName "labsrvcmps1.contoso.com"
}

Another easy Orchestrator step that could be added would be to send email notification to interested parties when a new ISO is available.

Compliance Settings – Examples


Note: These screenshots are difficult to read. The exported configuration items are attached along with a few additional examples. They are intended only for import in a lab environment.

Compliance Settings, formerly Desired Configuration Management in ConfigMgr 2007, has been a key ConfigMgr capability for years and remains one of the most capable, but often underutilized, features in the product. Part of the reason might be that Compliance Settings is a ‘build it yourself’ component that can leave an admin struggling to fully understand capabilities and implementation details.

Tools, such as Security and Compliance Manager (SCM), provide examples of effectively configuring Compliance Settings to address specific needs. Administrators are able to leverage given examples to help decrease their Compliance Settings learning curve. But even with SCM the examples given may not cover all of the available capabilities.

Many examples exist for mainstream detection methods, such as registry or file system. It may well be a struggle to find examples of less mainstream options such as reading an XML file or the IIS metabase. Having also been frustrated with the lack of examples I finally decided to illustrate each one. The purpose of this post, then, is not to cover the capabilities of Compliance Settings in detail but, by illustration, to show examples of how the various methods could be used. Even with the examples below there is much more that could be illustrated. Hopefully the examples will be enough to help you explore the full capabilities without the headache of having to figure it all out from scratch.

Active Directory
The goal of configuring an Active Directory detection is to craft the proper LDAP query, and the dialogs here help with that. The individual Distinguished Name, Search filter, Search scope and Property values are the only configurable items and, once configured, are combined to generate the resulting LDAP string. Once this string is complete it is possible to configure further rules to either check for the existence of the property, as in the example, or check for a specific value associated with the property.

clip_image002clip_image004
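To make the assembled string concrete, here is an illustrative example (the domain, OU and property are assumptions, not taken from the screenshots) showing how the four configurable fields combine:

```
Distinguished Name: OU=Workstations,DC=contoso,DC=com
Search filter:      (objectClass=computer)
Search scope:       Subtree
Property:           operatingSystemVersion

Resulting string:   LDAP://OU=Workstations,DC=contoso,DC=com
```

A rule could then check either that operatingSystemVersion exists on the matched objects or that it holds a specific value.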

Assembly
An assembly allows detection of specific items in the OS global assembly cache (GAC). The GAC specifically stores assemblies that have been designed to be shared by several applications on the computer. The idea is that if a required assembly is not present, of an incorrect version, etc., it could cause problems with key applications. The example given shows a configuration to simply detect whether a required assembly is present in the GAC.

clip_image006clip_image008

File
The file detection is able to check files or folders for their presence and various attributes. The example given checks a folder for a specific value for CreatedDate.

clip_image010clip_image012

IIS Metabase
The IIS Metabase contains configuration detail for hosted web pages. The example given checks for the existence of a specific property. Note that the path format simply is a navigation of the metabase tree as might be seen using a tool like metabase explorer.

clip_image014clip_image015

Registry
The registry detection allows checking for the existence of a registry key or registry setting and also detection of specific configurations for a given setting. With this detection method it is also possible to configure automatic remediation of a misconfigured setting.

clip_image017clip_image019

SQL
The SQL detection method allows for running a query against SQL servers. The example given shows using this method to detect a given setting returned by a simple SQL query. Notice that the trick here is to cast your output as a known column name that can then be used in any evaluations.

clip_image021clip_image023

WQL
Similar to the SQL format the WQL query allows for checking for the existence/configuration of an element found in WMI. With this detection method it is also possible to configure automatic remediation of a misconfigured setting.

clip_image025clip_image027

XML
The XML detection allows for opening and reading the contents of an XML file for a specific path and configuration. The XPath query used simply walks the path of the XML to the specific item of interest. If the XPath query doesn’t make sense spend just a few minutes looking at a properly formatted XML snip and you should be able to quickly understand how the pathing works.

clip_image029image
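For instance, given a hypothetical XML file like the one below (not taken from the post’s screenshots), an XPath query of /Configuration/Settings/Setting[@Name='Timeout'] simply walks the tree and selects the Timeout element:

```xml
<!-- Hypothetical configuration file used only to illustrate XPath -->
<Configuration>
  <Settings>
    <Setting Name="Timeout">30</Setting>
    <Setting Name="RetryCount">5</Setting>
  </Settings>
</Configuration>
```

A compliance rule could then assert that the selected value equals 30, or simply that the element exists.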

Script
The final detection method to discuss is the script. If you can’t configure your detection job with one of the previous detection methods then the script method is available. With the script method the detection possibilities are virtually limitless. With this detection method it is also possible to configure automatic remediation of a misconfigured setting. If remediation is required a remediation script must also be supplied.

clip_image033

The sample PowerShell script below detects whether any running instance of the WordPad process (or any process) has stopped responding. The script initially sets the $Compliance variable to True to indicate compliance and then proceeds to test the condition. If a non-compliant scenario is found the $Compliance variable is changed to False.

$ProcessName = "WordPad"
$Compliance = "True"   ## "Compliant"
$CountofSuspendedProcesses = 0
$Processes = Get-Process $ProcessName -ErrorAction 0

ForEach ($Process in $Processes)
{
    if (-not $Process.Responding)
    {
        $CountofSuspendedProcesses = $CountofSuspendedProcesses + 1
    }
}

if ($CountofSuspendedProcesses -gt 0)
{
    $Compliance = "False"   ## "Non-Compliant"
}

$Compliance

In all cases the script returns the variable being used for tracking compliance testing. The variable used can be anything as long as the script returns a value. The returned value is then tested by the compliance rule to determine compliance.

clip_image035

Summary
And that’s it. With these examples it should be very possible to configure the exact type of compliance scenarios that should be tested in any environment.

DemoCIs.zip

OSD SMSTSDownloadProgram Option


Had an interesting scenario recently while working with a large customer with a LOT of locations. The challenge was that they needed to stage imaging content on local servers at each location to facilitate imaging without overloading the network links. OK, easy – right? The twist was that they could not make the local servers distribution points in any way. For various reasons they also were not able to implement BranchCache or third party equivalents. They were also not on ConfigMgr Current Branch which would at least give them Windows PE caching. Thankfully ConfigMgr 2012 was in place!

ConfigMgr 2012 SP1 introduced the SMSTSDownloadProgram variable which I had overlooked completely. It is easy to overlook something until you need it!

The SMSTSDownloadProgram variable introduces a user-configurable alternate content provider, a way to specify an alternative process for downloading content needed by the task sequence instead of using ConfigMgr’s built-in mechanism. The only concern with using the alternate content provider is that doing so means the user takes on full responsibility for ensuring content is available when needed. Also note that the alternate content provider only supports download-and-execute scenarios. But when engineered well the alternate content provider can bring some great flexibility to the imaging process. In fact, this variable and the alternate content provider approach are the method used by third party vendors to ‘intercept’ and obtain content using their tools during a task sequence.

A quick glance at the documentation for this variable leaves a big question though: how exactly is this variable used, and how does it actually work? To find out, a demo of the approach was created. After a bit of trial and error it actually worked! The hardest part was getting the full concept; implementing is very easy. Read on to see how this was accomplished, but be aware that there are several other ways in which the alternate content provider could be effectively implemented. The example is just one of them.

When the SMSTSDownloadProgram variable is set the task sequence will use whatever detail is stored in the variable when it needs content during task sequence execution. The variable name suggests the kind of detail stored in SMSTSDownloadProgram: a program. The program can be anything, an EXE, a script, etc. The key is ensuring the referenced program is able to effectively access and download the content needed by the task sequence.

At the point the task sequence engine needs content it will attempt to look it up. Normally this would be through standard content lookup mechanisms. If the SMSTSDownloadProgram variable is defined the task sequence treats the alternate content provider as configured, ignores normal content lookup and instead hands off content acquisition to the alternate content provider by passing the program defined in SMSTSDownloadProgram two key pieces of information: the content ID and the location to store the content on the imaging system. The program referenced by SMSTSDownloadProgram needs to be able to handle these passed values. The example shown shortly simply uses XCOPY.EXE as the executable. It’s simple but it works and is available in all builds of Windows.

The best way to illustrate using the alternate content provider is by example.

Staging Content
Using the alternate content provider typically means that content will be retrieved without the use of ConfigMgr distribution points. In some cases it might be of interest to implement the alternate content provider for other purposes and still use the ConfigMgr distribution points as the content source. In such a scenario, accessing content from the ConfigMgr distribution points will need to be managed by the alternate content provider itself, with effective use of the related ConfigMgr APIs.

The example assumes a completely separate source of content will be used during the task sequence. As such, that source needs to be created by building a share to store the content. In the example the share is called TSContent. Immediately in the share a folder called Packages is created and then, inside Packages, the content needed by the imaging process is staged. Building and maintaining the share used by the alternate content provider is a completely manual process. Note that the folder names of the content follow the ConfigMgr content ID format; the task sequence engine will reference the content by ID when requesting it from the alternate content provider.
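
As an illustration of the layout just described, the share might look like the following. The server name and the first package ID are taken from the command lines shown later in the post; the other package IDs are made up for the sketch.

```
\\cm12prisql\TSContent
└── Packages
    ├── PS100004    <- folder name is the ConfigMgr content ID
    ├── PS10000A
    └── PS10000B
```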

clip_image001

The share is just a storage location. To make use of the share the task sequence needs to reference a program which will make use of the content. To hook this all up only a couple of steps are needed in the task sequence, as shown.

clip_image002

That’s it. Really easy.

The first step is simply to map a known drive letter to the network share OF THE PROGRAM, not the content share.

clip_image003

This step may or may not actually be needed depending on how the alternate content provider is actually implemented. In the example the alternate content provider code is stored on a network share instead of embedding in the imaging process directly (by adding to Windows PE, etc.)

It might also be noticed that the share being connected is NOT the one referenced above where the content is actually stored. Instead, this share is the location that contains the code which is the alternate content provider. The contents of the share are shown.

clip_image004

The code used by the alternate content provider can be anything. Here the code is simply a batch file named ACP for alternate content provider, with 1 added to indicate the first alternate content provider. Ah, interesting. What is meant by the FIRST alternate content provider? Typically only one alternate content provider is needed, but it would be possible to have more than one and have them change during the course of the task sequence just by resetting the SMSTSDownloadProgram variable as needed.

The contents of ACP1.cmd are really simple. As already mentioned, for the example XCOPY.EXE is being used, so the cmd file simply calls it. The combined /cherkyfs switches tell xcopy to continue on errors, copy hidden and system files, copy subdirectories including empty ones, overwrite read-only files, keep attributes, suppress overwrite prompts, display full paths and copy subdirectories.

xcopy \\cm12prisql\TSContent\Packages\%2\* %3\* /cherkyfs

A couple of questions might be front of mind at this point.

· If the cmd file is simply calling xcopy why not just store xcopy in SMSTSDownloadProgram instead of wrapping it in a cmd file?

That is a great question. A couple of reasons come to mind.

o The task sequence engine makes use of the alternate content provider by passing variables to whatever is stored in SMSTSDownloadProgram. While it may be possible to do this with xcopy directly (don’t know, haven’t tried), it certainly is easier to conceptualize passing variables to a program, even if that program is just a cmd file.
o Leveraging a cmd file – or VBscript – or PowerShell – or EXE – or whatever – gives more flexibility. It’s possible to start really simple but expand later as needed.

An example to illustrate: after the sample described to this point was built, a tweak was needed. Some systems being imaged had really small disks, so there was not enough room to store all of the content needed during imaging. Using the alternate content provider script it was possible to clear the cache on the imaging system at each step, removing any content remaining there since it was no longer needed. ACP1.cmd was tweaked to accomplish this.

rd c:\_SMSTaskSequence\Packages\ /S /Q
md c:\_SMSTaskSequence\Packages
md c:\_SMSTaskSequence\Packages\%3
xcopy \\cm12prisql\TSContent\Packages\%2\* %3\* /cherkyfs

· What are these %’s for in the command line?

o As mentioned, the task sequence engine invokes the alternate content provider by passing a set of values. In a batch file the % placeholders represent where those values are inserted: as the log entries shown shortly illustrate, %1 is a control file (SMSTSDownload.INI), %2 is the content ID and %3 is the destination path on the imaging system. If other languages are used the representation will be different but the concept is the same.

The second step in the process is to actually create and populate the SMSTSDownloadProgram variable, which flags the task sequence to use the alternate content provider instead of standard content lookup processes.

clip_image005

With all of this in place, is it possible to confirm the task sequence is using the alternate content provider? Yes. One obvious way: if the alternate content provider is configured but isn’t fully working, the task sequence will fail. Assuming the task sequence is working, validating that the alternate content provider is being leveraged is really easy. Open the SMSTS.log and check for a couple of entries.

An entry similar to the following will be seen if the alternate content provider is being used:

Using download program z:\ACP1.cmd

An entry similar to the following will be seen each time the alternate content provider is called:

Set command line: “z:\ACP1.cmd” X:\Windows\Temp\SMSTSDownload.INI PS100004 C:\_SMSTaskSequence
Executing command line: “z:\ACP1.cmd” X:\Windows\Temp\SMSTSDownload.INI PS100004 C:\_SMSTaskSequence

That’s it. Yet another way that imaging with OSD offers more flexibility and unique use cases.

Deploying OneDrive for Business – An Example


Deploying OneDrive for Business is a task that can easily be handled with ConfigMgr. Configuring the deployment is easy, but the process is not typical. My colleague, Paulo Jorge Morais Dias, has written a blog on how to do this, available here. The process he describes is similar to, but different from, what I have done, so I’m also writing up the process. The differences between the two examples illustrate that there are at least two, and in reality several, ways to handle the deployment.

In this example OneDrive for Business is deployed using the application model and makes use of three separate applications.

Application 1 – OneDrive for Business – Registry Import

Part of installing OneDrive for Business is configuring the required registry keys. These registry keys could be added in several ways: via GPO, a script deployed through ConfigMgr or several other methods. In the example the application simply imports an exported set of registry keys using reg.exe as follows.

clip_image002

Command Line

Reg.exe import onedriveregistry.reg

The example OneDriveRegistry.reg file

clip_image004

clip_image006

The registry import application MUST be set to run under user credentials. The specific registry keys being added exist under HKEY_CURRENT_USER. Installing under system credentials would cause the registry keys to be added in the incorrect location.
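
Since the application model also requires a detection method, and the keys land under HKEY_CURRENT_USER, detection must evaluate in the user context as well. The following is only a sketch of a script detection method; the key path shown is a hypothetical stand-in for whatever the exported .reg file actually imports.

```powershell
## Hypothetical detection sketch. The key path must match what the
## exported .reg file actually imports under HKEY_CURRENT_USER.
$Key = "HKCU:\Software\Microsoft\OneDrive"
If (Test-Path $Key)
{
    ## Any output to STDOUT (with exit code 0) tells ConfigMgr the
    ## application is installed; no output means not installed.
    Write-Host "Installed"
}
```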

Application 2 – OneDrive for Business – Main Executable

The OneDrive for Business main executable is OneDriveSetup.exe and is deployed silently.

clip_image008

Command Line

“OneDriveSetup.exe” /silent

clip_image010

The OneDriveSetup.exe MUST be installed under system credentials. One customer I worked with had a challenge with the application working correctly until they enabled the option to allow users to interact with the application. In the example, settings allow the OneDriveSetup application to run ‘Whether or not a user is logged on’. In that configuration there is no option to allow user interaction, but the customer had chosen ‘Only when a user is logged on’. With that setting, the customer’s experience was that the OneDriveSetup portion never completed or appeared in Programs and Features until user interaction was allowed. I have not tested such a configuration since it shouldn’t be common, but I’m including it in case others see the issue.

Application 3 – OneDrive for Business – User Install

The user portion of the install deploys OneDrive.exe. OneDrive.exe is actually deployed as part of installing OneDriveSetup.exe. The key to deploying OneDrive.exe is to ensure it is moved to the user’s profile and launched under the user’s credentials. There are several ways to do this. In the example the command line runs a script which first copies the OneDrive.exe source files from the ccmcache location to the user’s profile and then launches OneDrive.exe to facilitate the install.

clip_image012

Command Line

PowerShell.exe -executionpolicy bypass -File "DetectorsetOneDriveinstall.ps1"

The PowerShell script copies the source files from ccmcache to the user’s profile directory and then launches the install.

##Declare and set variables.
##Get the user profile environment variable
$UserProfile = Get-Item Env:UserProfile
##Get the value from the environment variable
$UserProfileValue = $UserProfile.Value
##Define the path to the location where OneDrive should be installed
$OneDriveLocation = "\appdata\local\microsoft\onedrive"
##Combine the user's profile with the location where the OneDrive files are to be copied
$OneDriveTestPath = $UserProfileValue + $OneDriveLocation
##Check to see if the path already exists
$OneDrivePresent = Test-Path $OneDriveTestPath

##If the folder doesn't exist, create it.
If ($OneDrivePresent -eq $False)
{
    New-Item -ItemType Directory -Path $OneDriveTestPath
}

##The script currently hard codes the ccmcache location. Adjustments to the script
##need to be made to convert the static ccmcache location to dynamic.
Copy-Item "c:\windows\ccmcache\6\*" -Destination "$OneDriveTestPath" -Recurse

##Launch the OneDrive.exe executable with the appropriate command line options
Start-Process -FilePath $OneDriveTestPath\OneDrive.exe -WorkingDirectory $OneDriveTestPath -ArgumentList "/configure_business" -NoNewWindow -Wait
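
As the comment in the script notes, the Copy-Item line hard codes cache folder 6. Because ConfigMgr launches the deployment command line from within the content folder in ccmcache, one way to make the source path dynamic is to resolve the running script's own location. This is only a sketch, not tested in every scenario:

```powershell
## The script itself is part of the application content, so the folder
## it runs from is the dynamic ccmcache content location.
$SourcePath = Split-Path -Parent $MyInvocation.MyCommand.Path
Copy-Item "$SourcePath\*" -Destination "$OneDriveTestPath" -Recurse
```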

clip_image014

The deployment MUST run under the user’s context.

Conclusion

That’s all there is to it! There are some tweaks that could be made to accommodate environment-specific details. In the example no configuration was done to accommodate uninstalls, so that is also an area for improvement, but the basic configuration works perfectly!

Note: When deploying these applications, it is possible to deploy each one individually, link Application 1 and 2 in a dependency relationship or link all three together and deploy as a unit.

Speaking at Live 360

Speaking at IT/Dev Connections 2016


10-13th October 2016: IT/Dev Connections, Las Vegas, United States

IT/Dev Connections is one of the biggest community-driven conferences with a broad audience. This year I will present two sessions.

 

Troubleshooting ConfigMgr and Intune – Deep Dive

This session will dive deeply into the ConfigMgr/Intune communications and will demonstrate how data flows across the connector and how to troubleshoot when things go wrong. The session will also discuss working with support when things go wrong on the ‘back end’ of Intune.

Content, Content, oh where art thou?

This session will review the content lookup mechanism in ConfigMgr, including preferred and fallback configurations, roaming and more.  The session will also review the new content lookup caching mechanism and its impact on performance.

Find more information here: http://www.itdevconnections.com

Pre-Staged Media the Flexible Way


Note: This article is not intended to demonstrate how to configure prestaged media deployments but, instead, to use prestaged media as another example of the great flexibility that can be achieved in OSD when you truly know how the system works.

When leveraging prestaged media in ConfigMgr the typical setup process is as follows:

· Identify which task sequence should be deployed by the prestaged process

· Use the Create Task Sequence Media wizard to generate the prestaged media WIM

Note: It is this step that also embeds the selected Windows PE version into the WIM. Windows PE is needed for the system to start up again after the prestaging actions are complete.

· Import the prestaged media WIM into the ConfigMgr console

· Create a basic task sequence to deploy the WIM

The key steps in the process are:

1. Specify the WIM file the prestaged wizard will build

clip_image002

2. Select the task sequence to be used in the prestaged process

clip_image004

The task sequence option here is just for convenience. Choosing the task sequence does NOT ‘lock’ the process to just using this task sequence.

3. Create a task sequence to deploy the content contained in the WIM built using the prestaged media wizard and deploy as normal. Notice that the task used is Apply Data Image and NOT Apply Operating System Image.

clip_image006

All well and good. One small wrinkle. In order for the process to run successfully, the boot image assigned to the prestaged media must match the boot image assigned to the task sequence actually being deployed. If not, the process will error. Why? When the prestaged WIM is created, the version of Windows PE selected in the prestaged wizard is injected into the WIM. The end result is that when the system reboots after the prestaged WIM is applied, it boots back into whatever version of Windows PE is on the hard disk. This is different from standard imaging, where the correct version of PE assigned to the task sequence is downloaded on the fly in the event of a mismatch. A pretty small inconvenience really, but it becomes larger when trying to leverage the prestaged process in an enterprise build environment where both x86 and x64 systems need to be built. Wait, is this really a big deal? An x86 boot image can be used nicely to build an x64 system. Right, but remember that little gotcha just mentioned: if the boot image associated with the task sequence doesn’t match the boot image associated with the prestaged media, the process will error.

Another wrinkle. What if the build process is happening in the same enterprise build location and there is a desire for flexibility in the process? Maybe one time it is of interest to select one image to build and the next time a different image should be deployed. Using the standard prestaged process this flexibility is a bit more cumbersome. It would be nice to have a prestaged process that allows using multiple prestaged builds dynamically, choosing at build time both which image to deploy and which version of Windows PE should be staged.

It’s actually pretty easy to include this type of flexibility, even with prestaged media.

To build a solution that allows prestaging to work regardless of architecture type and assigned boot media we just need to adjust the process a bit.

Step 1 – Empty WIM

The first step is to create a completely empty WIM by capturing a blank disk. This is easily done using a VM with a blank disk, booted into Windows PE. Once the WIM is created it is imported into ConfigMgr.

clip_image008
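
For reference, capturing the blank disk from the Windows PE command prompt can be done with DISM. This is a sketch; the drive letters assume the blank disk is C: and a USB or network drive E: is available to hold the capture.

```
Dism /Capture-Image /ImageFile:E:\Empty.wim /CaptureDir:C:\ /Name:"Empty Disk"
```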

Step 2 – Prepare a Fully Functional Prestaged PE ONLY WIM
With the blank WIM imported, we now want to create a WIM that will contain only Windows PE. How is that done? As referenced above, the prestaged media wizard requires that a task sequence be specified before creating the prestaged WIM. By using a task sequence with a single Apply Operating System Image step referencing the imported empty WIM, we are effectively able to leverage the prestaged media wizard to build a WIM containing only Windows PE that can be applied as needed.

Note: In order for the task sequence to show up in the prestaged media wizard it must have a boot image associated with it.

Once created, the WIM is imported into ConfigMgr and used as part of the prestaged media task sequence deployment. Applying this WIM allows the system to reboot into a fully functional PE, just as happens in traditional prestaged processing, while maintaining the flexibility needed in the process.

clip_image010

Step 3 – Create a Task Sequence to Deploy WIM Files

When creating the task sequence that will be used in this scenario, the process will apply two WIMs.

clip_image011

The first WIM applied is the one containing the actual image of interest. The WIM is applied using the Apply Operating System Image task.  Notice that the specific partition configuration is specified. Left alone, the disk would be configured to load the image in this WIM after reboot. To make the process behave like prestaged media, we need to overlay the Windows PE only WIM created by the prestaged wizard.

clip_image012

The second imaging step deploys the Windows PE only WIM just created. This WIM is applied using the Apply Data Image task. Notice that the specific partition is specified and matches what was used in the Apply Operating System Image task. Also, the option to Delete all content is NOT enabled which will allow the first image applied to remain.

clip_image013

After applying the two images the system will be configured to shut down.  When powered back on, the system will load Windows PE. If preferred, the command shutdown /r could be used to cause the system to reboot instead of powering off.

Step 4 – Apply Desired Image

When the PC restarts it will enter Windows PE just as would be expected if using the standard Prestaged process and the specific task sequence to use for imaging can be selected.

clip_image014


ConfigMgr Current Branch–Software Update Delivery Video Tutorial


Check out this video tutorial I have posted over at the ConfigMgr blog here.

Description of the video:
The release of Windows 10 brought with it a change in the way updates are released – updates are now cumulative. Since the release of Windows 10 this same cumulative update approach has been adopted for the remainder of supported operating systems. While this approach has significant advantages there still remains some confusion about what it all means.

The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies. In this session, Steven talks through the changes, why the decision was made to move to a cumulative approach to updating, how this new model affects software updating, how the cumulative approach is applied similarly and differently between versions of supported operating systems and more.

Comments
Please share comments on the video.  I am considering posting other videos like this on various ConfigMgr related topics and your comments will be valuable feedback for me to review.

ConfigMgr Current Branch–Windows Update for Business


Check out this video tutorial I have posted over at the ConfigMgr blog here.

Description of the video:
Ensuring software updates are applied across an organization is a key focus area for system administrators. Configuration Manager has been used by thousands of organizations for years to aid in this pursuit.  Other organizations have opted to use standalone WSUS for their software updates needs while still others may rely solely on the built-in engine to pull updates from Microsoft Updates.

Windows Update for Business, which was introduced around the time of the Windows 10 release, offers an additional option to aid administrators in the critical pursuit of ensuring systems are kept up to date. Understanding what Windows Update for Business is and how it can be implemented either standalone or through integration with Configuration Manager is critical, so you make the best choice for your business.

Comments
Please share comments on the video.  I am considering posting other videos like this on various ConfigMgr related topics and your comments will be valuable feedback for me to review.

ConfigMgr Current Branch – Express Updates


Check out this video tutorial I have posted over at the ConfigMgr blog here.

Description of the video:
Previous posts in this series have referenced the cumulative approach to delivering Windows updates that was introduced first with Windows 10. One side effect of the cumulative update approach is that the single update released is larger than the individual updates of days past. Further, with each release the update grows in size as additional fixes are added over time. This can have a noticeable effect in an organization each month as these new and larger updates are distributed across the network.

Express is a capability of WSUS and the Windows Update Agent that was added to help reduce the overall network impact of these larger updates. Express will identify just the portion of the update that is needed by the client and download only that piece. The effect is a much smaller overall download on the client. Configuration Manager current branch 1702 (though 1710 and higher is recommended for best performance) added full support for Express. Understanding how Express works is important so that administrators know what to expect and can plan accordingly; this is covered in the video linked below.

Comments
Please share comments on the video.  I am considering posting other videos like this on various ConfigMgr related topics and your comments will be valuable feedback for me to review.

ConfigMgr–Software Updates video series


Recently I authored and published a few videos on the ConfigMgr team blog.  Based on feedback and response from those videos I have published the remaining videos in the series.  Check out the videos here.

From the ConfigMgr team blog post.
In January we presented a Software Update Video Tutorial series. This series, hosted by Steven Rachui, was focused on the changes to software updates in Configuration Manager current branch. By popular demand we are adding another video tutorial for Office 365 updates. In this latest video Steven Rachui, a Principal Premier Field Engineer, demonstrates configuring a deployment of O365 in Configuration Manager and then discusses the options available for delivering updates for that O365 installation using Configuration Manager 2012 and Configuration Manager current branch.

For people who are new to software updates or who want a refresher, Steven has made his Software Updates Foundations series available. The tutorials in the series are:

  • ConfigMgr 2012 – Software Updates – Part I – Introduction and Overview – This video begins a series detailing the Software Update component in ConfigMgr 2012.  Discussion includes comparing differences between ConfigMgr 2007 and ConfigMgr 2012.  Demonstrations also include a general walkthrough of Software Updates components in the console with brief explanation about each.
  • ConfigMgr 2012 – Software Updates – Part II – Server Configuration – This video focuses on detailing the server-side components, including detailed discussion of the Software Update Point – including data flow and configuration options. Discussion then turns to configuring software updates for deploying, discussing strategies for using software update groups, creating deployment packages and automated deployment rules. The session concludes by reviewing the various client-side configurations that are implemented through site server settings.
  • ConfigMgr 2012 – Software Updates – Part III – Server – Deep Dive – This video is a deep dive on the server side that investigates the back-end processing that happens when various administrative actions are taken in the console.  Topics covered include the replication of software update metadata and content data.  The session also discusses how information in the console is expressed in the database.
  • ConfigMgr 2012 – Software Updates – Part IV – Client – This video focuses on the client experience when deploying software updates in several scenarios.  Discussion centers around the client-oriented software update components and gives examples of basic and more advanced deployments to show the user experience.
  • ConfigMgr 2012 – Software Updates – Part V – Client – Internals – This video is a deep dive into the process of software update scanning and installation.  Demonstrations include a detailed look at the scanning process, a review of log files key to software update deployment and a look at a few key WMI namespaces involved in software update installation.
  • ConfigMgr 2012 – Software Updates – Part VI – Monitoring Software Updates – This video focuses on monitoring software update deployment and compliance.  Discussion includes the various nodes of the ConfigMgr console, the data available in each and briefly touches the various report options.
  • ConfigMgr 2012 – Software Updates – Part VII – Automating Software Updates – This video focuses on opportunities to automate routine tasks in software updates.  Specific demos and scripts show automation of software update group maintenance, deployments, maintenance of deployment packages and more.

We would love to hear your suggestions for topics for future series.

OSD Video Tutorial: Part I – Introduction and Basics


This is the first session of a series that will detail the Operating System Deployment feature of ConfigMgr 2012. The session provides base knowledge that answers the questions - what is OSD?  Why OSD? The session also provides a quick look at the ConfigMgr 2012 console showing and describing the various elements relevant to OSD.

Link to blog post and video

Steve
