The Risolv Blog

The things you need to know about if you want to get the most out of your IT setup.


Cloud Computing Today

Cloud computing reduces initial capital investment and gives new companies the ability to get up and running in a short amount of time.

These services also cater directly to business owners and users. Keep in mind, though, that as with most things, not all services are equal, and the complexity of the overall solution increases with each additional service managed by a third party.

On-premises, cloud-based and hybrid solutions all have their advantages and disadvantages. To choose the best fit, we can discuss the short- and long-term goals you have for your business and decide together what solution is right for you. Ask us today.

 


Retrieve List Metadata from SharePoint Online (Office365) Using Windows PowerShell and C#

One useful tool we rely on at Risolv IT is Windows PowerShell, a task automation and configuration management framework from Microsoft.

The PowerShell command line shell uses a scripting language built on the .NET framework and contains many useful functions, including the ability for third parties to write modules that integrate with their applications and servers.

This is particularly useful when it comes to accessing and managing Office365 services such as Exchange and SharePoint. There are, however, few cmdlets for interacting with SharePoint Online sites, which means a few extra steps are needed.

Recently I was required to write a script to retrieve the metadata from a document library on a SharePoint Online site in order to apply that same metadata to an on-premises SharePoint server. The process was less straightforward than some, as there are very few built-in PowerShell cmdlets for interacting with SharePoint Online (as opposed to SharePoint hosted on a local server). In addition, the files in question were located within a folder of the document library rather than in the root, which required some extra steps.

The base code was obtained from the Scripting Guy blog at http://blogs.technet.com/b/heyscriptingguy/archive/2011/02/15/using-powershell-to-get-data-from-a-sharepoint-2010-list.aspx, which I then modified to suit my needs. The following outlines the process and any issues/resolutions I found along the way:

Required Files

This solution makes use of C# code, and to that end a few assemblies need to be present for everything to work with the SharePoint CSOM (Client-Side Object Model). The required assemblies are Microsoft.SharePoint.Client.dll, Microsoft.SharePoint.Client.Runtime.dll, System.Core.dll and System.Security.dll.

The two SharePoint-specific libraries can be retrieved from an on-premises installation of SharePoint (or by installing a trial version of SharePoint Server: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=16631). They are located in %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI.

The System.Core.dll and System.Security.dll libraries should already be present in C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5, but if System.Security.dll is missing you may have to search online for it (for example: http://www.dllme.com/dll/files/system_security_dll.html).
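
Before going further, a quick sanity check can confirm the files are in place. This snippet just reports True or False for each DLL (the paths assume the default locations mentioned above; adjust them if you copied the assemblies elsewhere):

$requiredAssemblies = @(
    "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll",
    "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll",
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll",
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Security.dll"
)
# Report whether each required assembly exists at the expected path
$requiredAssemblies | ForEach-Object { "{0} : {1}" -f $_, (Test-Path $_) }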

Code

Now that all the required files have been downloaded, the real fun starts. For the first part of the code, we use C# and run it in the PowerShell session thanks to the Add-Type cmdlet.

$cSharpCode = @"
using System;
using System.Collections.Generic;
using System.Security;
using Microsoft.SharePoint.Client;
 
namespace SPClient
{
    public class SharePointList
    {
        public static ListItemCollection GetList()
        {
            //Replace with your SharePoint site URL
            ClientContext clientContext = new ClientContext("https://example.sharepoint.com/location");
            SecureString pass = new SecureString();
            //Append each character of your password to the SecureString
            pass.AppendChar('P');
            pass.AppendChar('A');
            pass.AppendChar('S');
            pass.AppendChar('S');
            pass.AppendChar('W');
            pass.AppendChar('O');
            pass.AppendChar('R');
            pass.AppendChar('D');
            //Replace with your username
            SharePointOnlineCredentials creds = new SharePointOnlineCredentials("Username", pass);
            clientContext.Credentials = creds;
            //Replace with the title of your list
            List list = clientContext.Web.Lists.GetByTitle("List Title");
            CamlQuery camlQuery = new CamlQuery();
            //If the files are in the root of the list rather than in a subfolder, drop the Scope='Recursive' attribute and the FolderServerRelativeUrl line below
            camlQuery.ViewXml = "<View Scope='Recursive'><Query></Query></View>";
            //Replace with the path to the subfolder in the list
            camlQuery.FolderServerRelativeUrl = "/location/List Title/Subfolder";
            ListItemCollection listItems = list.GetItems(camlQuery);
            clientContext.Load(list);
            clientContext.Load(listItems);
            clientContext.ExecuteQuery();
            return listItems;
        }
    }
}
"@
 
$assemblies = @(
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\Microsoft.SharePoint.Client.dll",
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\Microsoft.SharePoint.Client.Runtime.dll",
    "System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089",
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Security.dll"
)

Add-Type -ReferencedAssemblies $assemblies -TypeDefinition $cSharpCode

Note: The version of System.Core needs to match the framework version of the SharePoint assemblies, so it may need to be changed depending on your version (e.g. Version=3.5.0.0 for .NET 3.5 SharePoint assemblies).

This C# code does all of the SharePoint-related work and returns an object for each file. Those objects still need to be converted to PowerShell objects, however, which is what the second part of the code accomplishes.

$items = [SPClient.SharePointList]::GetList()
$table = @()

foreach ($item in $items){
    $obj = New-Object psobject

    # FieldValues is a dictionary mapping column (field) names to their values
    foreach ($i in $item.FieldValues){
        $keys = @()
        $values = @()

        foreach ($key in $i.Keys){
            $keys += $key
        }
        foreach ($value in $i.Values){
            $values += $value
        }

        # Attach each field as a NoteProperty on the PowerShell object
        for ($j = 0; $j -lt $keys.Count; $j++){
            $obj | Add-Member -MemberType NoteProperty -Name $keys[$j] -Value $values[$j]
        }
    }
#put string parsing code here
    $obj
    $table += New-Object psobject -Property @{ Name = $($obj.FileLeafRef) }
}
$table | Export-Csv -Path "C:\temp\metadata.csv" -NoType -Encoding UTF8

And there you have it! This code simply outputs each object to the console and writes the file name of each object to a .csv file. It can easily be modified to output more properties.

Note that certain columns such as hyperlinks return an object rather than a string, so you will have to access that object’s attributes to get the data you need. For hyperlinks, you can access the string with $obj.hyperlink.description and the URL with $obj.hyperlink.url.
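
For example, using the hyperlink column from above (the column name "hyperlink" is whatever your library calls it; the CSOM returns hyperlink columns as FieldUrlValue objects):

# A hyperlink column comes back as a FieldUrlValue; pull out the two parts
$linkText = $obj.hyperlink.Description
$linkUrl  = $obj.hyperlink.Url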

Another thing I found problematic: if you have a column in your document library which can hold more than one value (e.g. type = dog; mammal), all the values below "type" shift down, so you end up with values associated with the wrong properties (e.g. if type was above date and date was above colour, then type=dog, date=mammal, colour=1/1/2015). To remedy this, I added if statements to check how far down the values had shifted, then compensated by retrieving each value from the property it had shifted to, as in the sketch below.
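
A minimal sketch of that compensation, which would sit where the "#put string parsing code here" comment is in the script above. The column names ('Type', 'Date', 'Colour') and the assumption that 'Date' should always parse as a date are illustrative only:

# If 'Date' holds something that is not a date, assume it received the
# overflow value of a multi-value 'Type' column and shift values back up
if ($obj.Date -and -not ($obj.Date -as [datetime])) {
    $obj.Type   = "$($obj.Type); $($obj.Date)"
    $obj.Date   = $obj.Colour
    $obj.Colour = $null
}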


Is Cheaper Always the Best Idea for Getting the Most Out of a Budget?

Yes, I know there is always a department or corporate IT budget, with costs constantly straining the monthly or yearly boundaries. However, as with construction, buying the cheapest equipment or parts may not always be the best long-term solution.

I have recently been working on systems for clients who saved money on the purchase of new machines, but who now face a significant impact as they plan system improvements, precisely because of those low-cost machines. The new machines came with Microsoft Windows 7 Home edition installed on no-name hardware in order to save on the initial purchase. Now that they are looking at linking these machines together, the OS is a stumbling block. The project involves creating a Windows Domain and joining the machines to it, so the client will have to choose between:

  1. Replacing the machines that are not powerful enough for a newer OS
  2. Replacing just the OS on the machines that can handle it, which will involve more labour

Discussing the current and future needs of your network and systems with your IT staff or IT solution provider prior to purchasing will help your company manage the cost of your current IT infrastructure as well as meet future budgets.


Azure Active Directory Sync Services (AAD Sync)

As of September there is a new and improved tool for synchronizing Active Directory to Office 365. It's much better, so upgrade those old sync tools out there.

To perform a manual update we now use the DirectorySyncClientCmd.exe tool. The Delta and Initial parameters are added to the command to specify the relevant task.

This tool is located in:
C:\Program Files\Microsoft Azure AD Sync\Bin
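
For example, from a PowerShell prompt on the sync server:

cd "C:\Program Files\Microsoft Azure AD Sync\Bin"
.\DirectorySyncClientCmd.exe delta      # synchronize changes only
.\DirectorySyncClientCmd.exe initial    # full synchronization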


Cloud Myths

A few myths about Cloud services that we hear all the time!

Moving to the Cloud is cheaper.
Think of a Cloud service in terms of real estate. Would you rather rent or own? Which is better? Which one saves you money? The answer is murky in either case; you need to look at these things on an individual basis.

You should have everything in the cloud.
Base your business/computing needs in reality. A hybrid solution is not a bad thing.

Everyone is doing it.
The fact is that while there is a trend towards cloud services, the bulk of IT spending in America still goes to in-house infrastructure, by a wide margin.

The Cloud is less secure.
Cloud services are no more or less secure than in-house services. Due diligence should take place in either case. Take care with some providers when it comes to your privacy and your ownership of the stored data.


To Cloud or not to Cloud?

At Risolv we work closely with our clients to navigate the complex landscape that is IT. One of the principal movements in IT and business today is the move to “the cloud”. The term “cloud” has such an appeal these days that if you or your business are not “in the cloud”, well, you might as well pack up your Blackberry and laptop (what, no tablet!) and slide into oblivion. I think the true solution is more nuanced, and one size does not fit all.

So what is “the Cloud”?

The term “cloud” comes from the simple fact that we (techy people) represent the Internet as a cloud shape when producing technical drawings. That’s it. The cloud is the Internet. Alright, I know it can be defined as Infrastructure, Platform or Application as a Service, but the lines are so blurry today that for all intents and purposes the Cloud is the Internet. Private cloud? Sure, if you want to call a company’s internal infrastructure a “cloud”, go ahead. To me it’s still a private network, which has better descriptors like “local area network” or “hosted environment”. Forget “Private Cloud”; it’s just confusing.

Are there advantages to being in the cloud?

Of course there are! There are some great services out there on the Internet (sorry, Cloud) which have some real advantages. A few clicks of the mouse, a credit card and voila! You have email, file storage, telephony, and the list goes on. Some companies offer a single service and do it really well (think Dropbox), while others offer a suite of products and do most of it well (think Google Apps or Office 365). The point is that for a relatively low startup fee you can be up and running with the services you need to run your business and your life. Right now!

Just think. No more hardware and service quotes from multitudes of vendors. No server rooms and dusty closets. Backups and disaster recovery are a thing of the past. No big capital costs. Finally, after all these years, we get peace of mind and a sound night’s sleep. Right?

You have help

Your IT person or company is your trusted advisor. They can help you decide on a plan that fits your requirements. They can ask you the questions or point out concerns that you may not have considered. They can work with you on the true cost and benefit analysis of moving to the Cloud.


Client Onboarding Process

Nice little Client Onboarding infographic we had Craig put together for one of Gordon’s recent presentations.

An image of Risolv's onboarding process.

IT Service Management (ITSM)

Another little infographic for ITSM we had Craig put together for one of Gordon’s recent presentations.

An image of Risolv's process for service management.


Factory Reset Juniper SSG5 with Pinhole and no Console Cable

Instructions to factory-wipe the unit without a console cable (pinhole reset only), using the status lights to track progress.

  1. Elapsed time 0s: Press the reset pinhole and hold for 6 seconds
    • Both the power and status lights will light/flash amber
    • The status light will go back to green after about 6 seconds; release the pin at that time
  2. Elapsed time 5 to 6s: Wait 2 to 3 seconds
  3. Elapsed time 8 to 9s: Insert the pin a second time and hold for 6 seconds
    • If the timing is right, the status light will flash red instead of amber
    • Holding too long or too short cancels the reset
    • If both power and status flash amber at the same time, you missed the timing; wait a few seconds and start over from step 1
  4. Elapsed time 14 to 15s: On release, power should be green; status will switch from red to orange to green fairly quickly
  5. Connect a LAN cable to port 0/2
  6. Assign a manual IP in the 192.168.1.0/24 range to your NIC (see the example after this list)
  7. Navigate to http://192.168.1.1
    • Username: netscreen
    • Password: netscreen
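
If you are configuring the NIC from a recent Windows machine (Windows 8/Server 2012 or later), here is one way to set the manual IP from PowerShell; the interface alias "Ethernet" and the address 192.168.1.2 are assumptions for this sketch:

# Assign a static address in the 192.168.1.0/24 range to the NIC
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.2 -PrefixLength 24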

Recover EFS Encrypted File in Windows Domain Environment

(Scroll down to the step-by-step to skip the intro)

Background
Things to keep in mind: first, EFS can easily be enabled without the end users realizing it. Second, the EFS flag sticks when a file is sent between Windows systems, even if the domains or networks are completely independent (so if Machine A has EFS on for the entire disk and a user sends a file to Machine XYZ, EFS is still on). Third, Windows will try to enable EFS at the folder level by default when a user enables the feature for a single file.

The one redeeming factor for managing EFS is that the DRA (Data Recovery Agent) in a Windows Domain environment is, by default, the built-in Domain Administrator. To confirm who the DRA is for a file, you can use cipher.exe (part of the system in Vista/Windows 7/2008; it replaces efsinfo.exe from earlier OSes) like this:

cipher /c filename.ext

This will list the user and thumbprint for the private cert, as well as the user and thumbprint for the data recovery cert. The good news is that in a domain environment, the data recovery cert is, by default (even if the Group Policy is not configured), stored on the Primary Domain Controller and is set to the DOMAIN\Administrator account.

The second piece of info you need to keep in mind is that you cannot recover a file over the network. You have to have the recovery cert installed in the Personal store of the machine hosting the files you are trying to recover.

Step-by-step Recovery:

  1. Run CMD on machine hosting the encrypted(EFS) file
  2. CD to the directory of the file
  3. cipher /c filename to get the DRA user and Thumbprint (to confirm)
  4. Remote Desktop to the Primary DC. Important: log in as Administrator (this has to be the built-in domain\administrator account; another domain admin will NOT work)
  5. Open MMC
  6. Add Certificates snap-in for the local user account
  7. Navigate to the Personal store in the tree; you should see a few certs there
  8. Scroll the window to the right, look at the Intent column, you are looking for the cert with “File Recovery” listed under intent. (NOTE: You can look at the thumbprint, it will match from step 3)
  9. Right-click and Export the cert (set a password, etc., and note the export location)
  10. Now copy the resulting .PFX to the server hosting the EFS file
  11. Back on the server hosting the EFS file, double-click to import the cert, place it in the Private store

That’s the bulk of it. Using Windows Explorer, you can now navigate to the folder containing the EFS files and do the following:

  1. Take ownership (Right-click – Properties, Security, Advanced, Owner)
  2. Remove EFS check mark (Right-click – Properties, General – Advanced, Encrypt)

NOTE: You can use cipher.exe /s:foldername to recursively list all files and folders under the folder in question. The output is nothing pretty, though, so you may want to filter it with | find "thumb"


Internet Explorer – Unable to Continue to Site due to Invalid Certificate

This only applies to Internet Explorer users.

In some scenarios, users trying to access sites with invalid certificates (self-signed being the most common in my case) will NOT have the option to continue to the site.

Thankfully this security is more “security by obscurity” than something that is actually coded into the software. If you right-click on the page in question and look at its properties, you will see an address like this:

res://ieframe.dll/invalidcert.htm?SSLError=16777216&PreventIgnoreCertErrors=1#https://some-site-url

Workaround

The workaround, as you can probably guess, is to change PreventIgnoreCertErrors=1 to PreventIgnoreCertErrors=0, keeping the #https:// URL portion intact.

You can do this by copying the URL above, replacing some-site-url with the actual site you are trying to reach, and pasting the result into IE’s address bar.
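
For example, for a hypothetical site at https://intranet.example.com, the address to paste would be:

res://ieframe.dll/invalidcert.htm?SSLError=16777216&PreventIgnoreCertErrors=0#https://intranet.example.com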

You will still get the cert error, but you will now have the option to continue to the site in question. If you then follow up with the usual method for getting around self-signed certs, adding the site to trusted locations and installing the cert, you won’t have to use this workaround for that site again.

UPDATE:
This TechNet post has information on the actual fix.

As per the TechNet article, you may still need to reduce the RSA minimum public key length and potentially allow weak signatures. Obviously this has the potential to expose you to weakly encrypted traffic (just for those weak sites; it doesn’t impact sites with proper certs in place).

  • Certutil -setreg chain\minRSAPubKeyBitLength 512
  • Certutil -setreg chain\EnableWeakSignatureFlags 2

DNS Recursion Map

DNS is a topic that comes up fairly often in the office. I was looking for something basic to outline, in general terms, the steps that take place for your client application to connect to the host name you are trying to access.

Since I found most of the diagrams that came up in Google Images confusing (I’m simple...), we put together the map here. Of course there’s a little more to DNS recursion than this, but I find the map itself does a fairly good job of detailing the process.

A mindmap image of DNS recursion
The color-coding is meant to help identify which server is “asking” and which server is providing the response. The steps are as follows:

  1. The client computer tries to resolve google.com; the request is sent to the Preferred DNS Server, which in most scenarios is the one your ISP provided to you.
  2. The Preferred DNS Server actually handles most of the workload (hence the recursion). It first checks whether it is the Authoritative host for the zone google.com (as in, whether it hosts the zone google.com); if it is, it sends the client computer back the IP address for the host record needed. If it is not authoritative for the zone and has DNS caching enabled, it checks its cache to see if it has recently resolved that hostname; if so, it sends the IP address in its cache back to the client. Cached records have a TTL, and when the TTL expires the Preferred DNS Server goes through the DNS recursion process again, which causes the record to be updated (see DNS Propagation). Cached records alleviate load by skipping the need to resolve a host name for every single request.
  3. If the Preferred DNS Server does not have a cached record for the request, or the cached record has expired, it sends a request to the Root DNS servers.
  4. The Root DNS server responds to the Preferred DNS Server with an address for the Top-Level Domain server holding the next bit of information needed; .COM was not the easiest to use in this example, but TLDs are organized by country codes (.CA, .BM, .UK), generic (.COM, .NET, .ORG) and sponsored (.travel, .info).
  5. Once the Preferred DNS Server has the TLD’s address, it requests from the TLD the actual Name Server records for the zone; the Name Servers are the ones hosting the actual DNS zone for google.com.
  6. Once the Preferred DNS Server has the NS address back from the TLD, it can query the actual Primary NS (there is usually more than one NS returned) for the record needed, in this case the Address record for google.com.
  7. The Preferred DNS Server then refreshes its DNS cache (if it has caching enabled) and provides the client with the IP address associated with google.com.
  8. To the client application all of this is transparent; it simply proceeds with whatever requests it was trying to perform on that host, and now that the IP address is resolved, the data/requests route to the appropriate host.
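
You can walk these steps by hand with Resolve-DnsName (Windows 8/Server 2012 and later) by querying each level yourself. The addresses below are a root server (a.root-servers.net) and a .COM TLD server (a.gtld-servers.net); in practice the name server used in the last query would come from the answer to the second:

# Ask a root server which servers handle the .COM TLD
Resolve-DnsName -Name com -Type NS -Server 198.41.0.4
# Ask a .COM TLD server which name servers host the google.com zone
Resolve-DnsName -Name google.com -Type NS -Server 192.5.6.30
# Ask one of the returned name servers for the actual Address (A) record
Resolve-DnsName -Name google.com -Type A -Server ns1.google.com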