Wednesday, May 28, 2008
We started our production deployment here on May 12, and all in all I am pretty happy. There have been a couple of "gotchas," but after about a year of working on this thing I have to say it has gone very smoothly. One of our biggest issues here is bandwidth. Going from XP to Vista our deployment has, in most instances, tripled in size: with XP we were at about 10 GB, and with Vista we are at about 30 GB. The problem is that our network infrastructure is inconsistent at best. There is nothing I can do about that, so it's a limitation I have to deal with.
The first deployment we did was on an 8-port Gbit switch (that's what we had at the time). That allowed us to do six machines at a time; two ports were used by the net connection for Windows updates and by the NAS we build from. Those six machines took three hours. Next we put twelve machines on a 100 Mb switch; those took four and a half hours. That evening we plugged the NAS into the wall and started the rest of our machines (about 25). Sorry, I don't have exact figures on this, but by the next morning several machines had failed asking for the CD, I think 3 just failed with some deployment error, and 4-6 were still building.
So after looking at this I tried to think of different ways to get things done faster. First off, I have to say that I am pretty happy with the loading on our NAS. I don't know how well it should load, but we were never able to load our old server pulling down Ghost images the way we can load the NAS (if we got more than 5 or 6 Ghost sessions going, some would start to fail). We are using a QNAP TS-409 Pro NAS. One thing I thought about doing is copying the application install files down to the local machine and having the task sequence fire them off the local HDD; currently the installs are fired off the server/NAS. I don't know if that would work in our environment, because until the past year we always got the smallest HDD we could from the vendor since we did not really need bigger drives. I also thought about other temporary network topologies, but nothing really stands out as a solution. So in the end we just opted for a 24-port Gbit switch and let 'em grind.
The biggest time hog for deployment/installing is Visual Studio 2008. The old version took forever to install; I think this one takes twice as long! This install just drives me insane. I have sat and watched the resource monitor during this install and it is just slow. First off, it looks like msiexec.exe needs to be ported over to being a multithreaded app. For almost all of the VS install the processor is pegged right at 50%. (I've looked for documentation on whether Windows Installer is single threaded or multithreaded and could not find anything, so I am assuming it is single threaded.) Office 2007 and CS3 take some time too, but they seem to be a fair amount better about balancing network/CPU/HDD/memory usage during their installs. VS seems to have a one-track mind: it is either using network or CPU or HDD, but when one spikes the others fall off.
As for errors, the most common one I've seen is a program giving me an exit code of 13, and I cannot find a way to make the deployment wizard accept that return code as OK. Other machines have failed while doing Windows updates, which to me is not a show stopper -- they will get their updates soon enough. One or two errors caused us to just wipe the drive and start over. I don't remember the exact errors, but they were pretty odd ones about installing Vista, and wiping the HDD did the trick.
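One workaround on my list to try (just a sketch -- setup.exe and the /quiet switch are placeholders for whatever installer is handing back the 13) is wrapping the installer in a little batch file that remaps that one exit code to 0 so the framework sees success:
@echo off
rem Run the real installer (placeholder name and switch).
"%~dp0setup.exe" /quiet
rem This app's exit code 13 is harmless, so remap it to 0.
if %errorlevel%==13 exit /b 0
exit /b %errorlevel%
Then point the application entry at the wrapper instead of the installer itself.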
Thursday, May 8, 2008
Quick and dirty "spare" deployment point
In our environment we have some "problem" labs that we have to deploy Vista into. For one reason or another the network will not really handle deploying several machines at a time. To remedy this situation I got us a NAS to work off of. This presents some interesting challenges.
The deployment tool isn't really set up to handle a dual-server setup very well (if at all). I might be missing something that would make this really easy, but I don't think so. Other blogs I have read talk about DNS round-robin, load balancing, and the like, but none of those scenarios really address the problems that I face -- slow connections to the server room, either because of a WAN link or a flaky network. At any rate, I got us a small NAS so that I would have a local resource we could build from in any lab.
What I did was create shares on the NAS with the same names as the ones on our primary server. I use two different shares, one for the deployment and one for the applications. I then copied all the files from each share over to the corresponding one on the NAS. After doing this I had to modify a few things to point to the NAS.
One of the first things I had to modify was the bootstrap.ini file to point to my NAS -- but that was easier said than done. There might be an easier way than what I put here, but this is what I did. I took the LTI ISO and mounted it so I could read/write to it. I then had to open the boot.wim in the Sources folder of the ISO (for the time being I'm going to leave that to your google-fu -- if anyone asks I'll write it up later). I edited the bootstrap.ini in there, saved it all back out, and burned a boot CD.
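If it helps your searching, the mount/commit part of the process looks roughly like the ImageX syntax quoted later in this blog. This is from memory, so treat the paths (and where bootstrap.ini sits inside the image) as things to verify yourself:
imagex /mountrw boot.wim 1 c:\mount
rem ...find and edit bootstrap.ini under c:\mount, then...
imagex /unmount /commit c:\mount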
From there I had to modify several files on the NAS to point to the NAS:
\Control\Applications.xml
and
\Control\*\TS.xml
where * is the task sequence number of any task sequence you might use from the NAS.
I did all of this outside of the deployment tool because I did not want to make two deployment points, and I didn't want to duplicate task sequences and have to edit both of them for every change I make (well, I have to edit things anyway, but I would rather do a search and replace in a text editor than use the task sequence GUI). The edits themselves are just path swaps, like the example below.
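For example -- SERVER, NAS, and the share/app names here are made up -- every UNC path in those files changes from the first form to the second:
\\SERVER\Applications$\SomeApp\setup.exe
\\NAS\Applications$\SomeApp\setup.exe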
That's my quick and dirty way to use another server to deploy from. Good luck.
Friday, April 25, 2008
Using Robocopy
In part of my deployment I use robocopy to copy over a batch file for later use by the system. I am now at the point in my deployment where I am working on the "little" things and getting the kinks out of them. After my task sequence ran robocopy it would fail, because robocopy returned an exit code of 1. After a bit of digging I found that exit code means everything worked out just fine. So if you are using robocopy in any of your task sequences, you will want to find that/those step(s), go to the Options tab, and edit a few things. I put a check in front of "Continue on error" so that copying over one file is not going to derail my deployment. I also added a 1 in the "Success codes:" box. I have never touched this box before, but I have faith it is going to solve my problem with robocopy. Be aware that you may have to add other robocopy "success" codes to keep your particular deployment from barking at you.
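For reference, my step is just a plain robocopy command along these lines (the paths and file name are placeholders). Robocopy's return codes below 8 are all flavors of success (0 = nothing needed copying, 1 = files copied), while 8 and up mean real failures, so 0 and 1 are the usual ones to whitelist:
robocopy "\\SERVER\SHARE$\tools" "C:\Windows\Temp" mybatch.cmd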
Friday, March 28, 2008
Booting off USB
I found a little note in the MSDT help files on booting off USB. I ran into a couple of little gotchas when doing this, and one of them is in their instructions, so I'll give you my version (this wipes your flash drive, just so you know).
1. Open an admin cmd prompt.
2. Run "diskpart".
3. Run diskpart command "list disk".
4. Make note of the disk number of your flash drive.
5. Run diskpart command "select disk (drive number)".
6. Run diskpart command "clean".
7. Run diskpart command "create partition primary".
8. Run diskpart command "select partition 1".
9. Run diskpart command "active".
10. Run diskpart command "format fs=fat32".
11. Run diskpart command "assign".
12. Run diskpart command "exit".
Then the instructions tell you to use xcopy to copy the contents of your boot CD to the flash drive. Xcopy would not work for me; I just copied the contents of the CD over to the flash drive and it seems to work just fine.
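If you do this very often, diskpart can also read the whole sequence from a script file (save the lines below as usbprep.txt and run "diskpart /s usbprep.txt" from an admin prompt). The "select disk 1" is a placeholder -- triple-check the number, because clean wipes whatever disk is selected:
select disk 1
clean
create partition primary
select partition 1
active
format fs=fat32
assign
exit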
Remember to disconnect your flash drive before your machine needs to reboot or your sequence will fail.
Custom OU-move script failing with 0x8007052E error.
After upgrading and messing around with some new hardware (looking at a NAS for a mobile build server) I started getting errors with the custom OU-move scripts that I have been using for a while. I copied my deployment point over to the NAS I'm looking at and made the changes needed in the bootstrap.ini and applications.xml files. I tried my deployment off the NAS, and when it got to the Z-MoveComputer_StagingOU.wsf script it failed with an error code of 0x8007052E (-2 something dec -- sorry, didn't write it down). After a fair amount of troubleshooting I found that, off the NAS, it did not like the following line in the code (probably 2/3 of the way down):
Set objContainer= openDS.OpenDSObject("LDAP://" & strDC & "/" & strStagingOU, strAccounttoJoinWith & "@" & strDomain, strAccountPassword, ADS_SECURE_AUTHENTICATION)
In particular I had to change admin@domain into domain\admin for the script to function, as shown below. If you are seeing this error then see if swapping that helps you. The NAS is not on the domain and our other deployment server is. I don't know why this would need to change on one vs. the other, but I don't make the rules, I just play the game.
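Concretely, that means swapping the user argument from UPN style to down-level style, so the line above becomes:
Set objContainer = openDS.OpenDSObject("LDAP://" & strDC & "/" & strStagingOU, strDomain & "\" & strAccounttoJoinWith, strAccountPassword, ADS_SECURE_AUTHENTICATION)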
Thursday, February 28, 2008
Custom script for automating app install by machine name.
Phew, what a title! OK, so it took me about a week to get this little script functioning. The reason I came up with it is that I really could not find a way to automate application installs by machine name that fit into our "ecosystem". The old way we did application installs, in XP and before, was with a VBScript. I'm not opposed to the old way, but the way we did it had no error checking. What I really wanted was something that would tell me which application(s) failed to install, if any did. Since this is already handled in MSDT I really wanted to leverage that.
I don't claim to be a programmer; I'm a hardware guy. I'm sure someone could come up with a more elegant way of putting this script together (and if you do, please let me know), but what I have works for me. In our environment we have labs that need a basic set of software installed, plus different sets depending on the lab. We have tried as best we can to standardize our machine names, and that helps a lot. Thus, one of the things this script does is strip the lab name out of the machine name (i.e. BB100-01 is lab BB100, machine number 01).
The script then looks at a file I've called sourcetxt.txt to see what software needs to get installed where. Because of my (limited) scripting ability everything has to be jumbled together, which makes it hard to read, and if the data is not in the correct format the script dies. The data in this file takes the form of:
LabID=All:AppID=keyaccess
LabID can be either All or the name of the lab (BB100 from above). The : is the field delimiter. The AppID is the "name" of the application to be installed. I say it like that because the name is actually a file with that exact name + .txt that holds the GUID(s) for the application -- it can have multiple GUIDs in it. I did this because we key our apps, and thus I have the actual app install and then the keyed exe to install over it. (If I ever get the time to devote to learning key server I could learn to deputize installers, but that's a whole other topic right there.)
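So a slightly fuller, made-up example: sourcetxt.txt might hold
LabID=All:AppID=keyaccess
LabID=BB100:AppID=winscp
and then winscp.txt would hold the application GUID(s) copied out of the deployment tool (this one is obviously a placeholder):
{12345678-1234-1234-1234-123456789abc}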
Included with the zip file are three files: \Z-CUSTOM_LabApps.wsf, \Applications\sourcetxt.txt, and \Applications\winscp.txt
I put my script in the Scripts folder, and I made a subdirectory called Applications in that folder to house the two files that it references.
After the script strips out the lab name, reads the apps required in that lab, and gets the associated GUIDs of those apps, it writes those values back into the VARIABLES.DAT file. This lets the MSDT framework handle the app installs and report back any app install errors, as sketched below. If you have applications specified in your CustomSettings.ini file then you will have to modify the script to start numbering where you left off.
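In other words, the end result is just the standard numbered application variables the framework already understands -- conceptually something like this (the GUIDs are placeholders):
Applications001={11111111-2222-3333-4444-555555555555}
Applications002={66666666-7777-8888-9999-000000000000}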
I created my own custom task sequence to run this script. I run it during the Preinstall\New Computer Only section after the "Copy scripts" built-in task runs.
Hope it helps!
Download the LabApps.zip here.
Friday, February 22, 2008
Sidebar crashing when installing applications
For whatever reason the sidebar seems to like to crash in the middle of application installations. It does not seem to matter which applications I pick to install (though CS3 seems to be the worst one); it dies somewhere along the way. It's kinda scary seeing those pop-up boxes asking if you want to debug. So I added the following task sequence step to kill the sidebar right before installing apps.
taskkill /F /IM sidebar.exe /T
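One caveat I'd expect (I haven't tested this): if the sidebar isn't running, taskkill itself returns a nonzero exit code, so either add that code to the step's success codes (like in the robocopy post) or wrap the command so the step always reports success:
cmd.exe /c "taskkill /F /IM sidebar.exe /T & exit 0"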
Custom automated apps script coming.
I think I've come up with my own custom script for automating the application installs. I'm testing now and the script hasn't failed. I just have to see if MS' programmers are better than I am and whether the VARIABLES.DAT file can handle spaces and line returns (my script chokes on both). If it can't then I will have to work on my script some more.
Thursday, January 3, 2008
Move OU scripts
Ok I am going to try to get the ball rolling on the scripts for moving your machines around in the OUs.
First off, you will need to get the scripts from the following post:
BDD 2007 - How to move a computer object in Windows PE
The scripts are kinda hidden right before the comments section. Taking a look at that blog post ought to keep you busy for a bit.
Because I am using an LTI approach I followed the link in the middle of Ben Hunter's post to Johan Arwidmark's file. The following is Mr. Johan Arwidmark's complete set of instructions for getting ADSI to work in an LTI:
Instructions
1. Download the Plugin from http://www.deployvista.com/Repository/WindowsPE20/tabid/73/Default.aspx and extract to C:\Plugins\ADSI
2. Copy the following files from a Windows Vista to C:\Plugins\ADSI
adsldp.dll
adsnt.dll
mscoree.dll
mscorier.dll
mscories.dll
3. Using ImageX, mount your WinPE image (winpe.wim)
Syntax: ImageX /mountrw winpe.wim 1 c:\mount
4. Using PEImg, add support for Windows Scripting Host (For the sample script)
Syntax: PEImg /install=*Scripting* c:\mount\windows
5. Using PEImg, inject the plugin
Syntax: PEImg /inf:C:\Plugins\ADSI\ADSI.inf c:\mount
6. Using ImageX, commit the changes
Syntax: Imagex /unmount /commit c:\mount
Note: A sample script that connects to a DC and lists the users container is provided as well... the plugin will not inject this sample script by default.
These instructions caused problems for me. At first I had problems just getting them to work. Make sure you pay attention to where you installed things and where you put all your files and plugins and all that good stuff. I ended up making a batch file so I would not have to type those commands over and over at the command prompt. You will need to search for imagex.exe and peimg.exe in order to use them. After some tweaking to my bat file I got the above instructions to work for me.
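My batch file boiled down to something like this -- the paths are the WAIK defaults on my box, so adjust them to wherever your imagex.exe and peimg.exe landed:
@echo off
rem Mount the WinPE image read/write (placeholder wim/mount paths).
"C:\Program Files\Windows AIK\Tools\x86\imagex.exe" /mountrw C:\winpe\winpe.wim 1 C:\mount
rem Inject the ADSI plugin. Step 4 (scripting support) is left out -- see the update below.
"C:\Program Files\Windows AIK\Tools\PETools\peimg.exe" /inf:C:\Plugins\ADSI\ADSI.inf C:\mount
rem Commit the changes back into the wim.
"C:\Program Files\Windows AIK\Tools\x86\imagex.exe" /unmount /commit C:\mount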
{UPDATED since I got feedback from Mr. Ben Hunter.
I don't think I was being clear about what was happening when I responded to Mr. Hunter's blog post. As you can see below in my comments, Mr. Hunter has confirmed that step 4 of Mr. Johan Arwidmark's instructions above does need to be left out for MDT to "compile" your WinPE image properly. So you can skip the next paragraph, but I will leave it in so that when you read Mr. Hunter's comment you won't wonder what he's on about.}
After getting that to work I headed back to Ben Hunter's post to march onward. Up until the point where I had to update my deployment point, everything went as planned. I did an "Update" [not "Update (files only)"]. In the middle of this, MDT (was BDD) would crap out and give me an error. I found the error log for MDT and discovered that my install of MDT was already doing step 4 of Mr. Johan Arwidmark's instructions for me. I commented on this to Mr. Ben Hunter on his blog and he did not believe this was possible. Now, I believe that Mr. Ben Hunter knows way more about MDT than I do or ever will; I can only report what is happening to me. My MDT is, by default, adding in the scripting support that Mr. Ben Hunter says it should not be. When MDT got to the point of injecting scripting support into the WinPE image it would throw an error and WinPE creation would fail. If this is happening to you, then I would suggest you not use step 4 of Mr. Johan Arwidmark's instructions.
After clearing that hurdle I ran smack into the next one, which happened to be a brick wall. The next set of problems I ran into cost me about a week. I hope that I will be able to put enough troubleshooting info in this post to get anyone past the errors that I ran into.
I got my PE disc made. I started my LTI process. It pulled down my image. It did a few more things and then died: I got the pink screen of your-build-is-toast. Looking at the details I got the following error:
ZTIERROR - Unhandled error returned by Z-MoveComputer_StagingOU: Table does not exist. (-2147217865)
There are several things you will need to do to get past this error. First off, take a look at the script. You will find that it starts by copying global variables into local variables. The first thing I did was hard-code into the script the things I wanted in those local variables (i.e. my username, my passwd, the domain controller, and so on). I then started adding a lot of logging entries so that I could see at what point the script was barfing. Here is an example of one of the entries that I added:
oLogging.CreateEntry "7VARS:" & strStagingOU & "_" & strComputer & "_" & strAccounttoJoinWith & "_" & "passwd" & "_" & strDomain & "_" & strDC, LogTypeInfo
This helped a lot in tracking down where something had to change to make things work. I put entries like that all over the script.
In case you need help getting started on troubleshooting that script, here's what you need to do. You have the pink screen of build death up; just move it down and out of your way. In the command prompt that is open (there should be 2 -- find the one with a prompt you can actually type things into), open 2 notepads and then connect to your deployment share (net use z: \\SERVER\SHARE$). Go to z:\scripts. Use this command to get started troubleshooting:
cscript Z-MoveComputer_StagingOU.wsf /debug:true
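Put together, the whole session from the WinPE command prompt looks like this (server and share names are placeholders):
net use z: \\SERVER\SHARE$
z:
cd \scripts
cscript Z-MoveComputer_StagingOU.wsf /debug:true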
{UPDATED since I got feedback from Mr. Ben Hunter.
It appears that I was having some weirdness with my TSes. I had to remove them all and re-add them in order for a couple of issues to clear up -- one of them being variables not getting initialized.}
A current issue I am dealing with on this script is that MS decided to change a global variable name from one release to the next. To see this for yourself, open the script in one of your notepad windows; in the other, open the variables.dat file (c:\minint\smsosd\osdlogs\). I found that the global var "DomainAdmin" has been changed to "USERID" and "DomainAdminPassword" has been changed to "USERPASSWORD". I had to update my script to reflect this change. Of course the problem is that my old task sequence still has the old global var names, so I have to totally redo my old TS to make sure any new TSes I make work with the move-OU scripts.
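For reference, VARIABLES.DAT is just XML, so the entries you are hunting for look roughly like this (the values are placeholders, and the surrounding wrapper element may differ between versions):
<var name="USERID"><![CDATA[admin]]></var>
<var name="USERPASSWORD"><![CDATA[passwd]]></var>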
{UPDATED since I got feedback from Mr. Ben Hunter.
Thanks to Mr. Hunter again for the CN vs. OU comment. I know that I was using OU=Computers, not CN=Computers. Making the new OU that I called VistaTest worked for me using OU=VistaTest. I don't claim to be an AD guru, but I think I can justify to myself why OU=Computers didn't work. I don't know if I'm correct, so I will not type out my justification. :) }
Another problem I encountered with this script is that my domain controller would not allow the scripts to move machines into the generic "Computers" container. (As Mr. Hunter's comment hints, the default Computers container is a container, CN=Computers, not an OU, so an OU= path to it will not bind.) When I created a new OU and told the scripts to use that OU I stopped getting errors. I could move the machines into it using Users and Computers, so I don't know why the script was not allowed. Just something to keep in mind.
A note on the scripts themselves: I was not familiar with what the scripts were actually doing, so it seemed to me that they were doing things in reverse. That is by design. The scripts connect to the OU that you want the machine moved into and then say "move the computer here." That threw me off. So if you are looking at the script and thinking, "Why is it doing it that way? That's backward," that's probably why.
As far as I can recall that covers the errors that I saw and what I've had to do to fix them.
Desktop resolution problems
A couple posts ago I talked about the problems I had with my test deployment due to desktop resolution. To refresh your memory: Adobe CS3 requires the desktop resolution to be set to at least 1024x768 or it will not install. If you are getting errors deploying CS3, make sure the machine meets all the system requirements -- the installer checks!
When our builds started they were at a resolution of 1024x768, but a driver update from Windows Update was resetting the resolution to 800x600. I assume this is the default resolution for that driver. So after a bit of searching I ran across Resolution Changer, a program that can change your desktop resolution from the command line.
To solve my problem I added a task sequence step between the pre-application Windows updates and installing applications. I added it as a Run Command Line task. I put the little tool on my server and run it from there. On the advice of one of the MSDN bloggers I named my custom task "CUSTOM - reschanger" (adding CUSTOM to the front makes my extra steps a lot easier to find).
For the Command line:
\\SERVER\SHARE$\tools\reschange.exe -width=1024 -height=768 -depth=32 -refresh=60 -force -quiet
(All one line, no line breaks.)
For the Start in:
\\SERVER\SHARE$\tools
(I don't think this is really needed, but I put it in there just in case.)
I ran into a couple issues with this when I first started. At first I just put in the exe with the width and height switches, and that gave me errors. Not really in the mood to troubleshoot them, I added the full command line you see above and everything worked out just fine.
Running the program by itself gives you a pop-up box listing the switches and what they do.
You can download the program from:
http://www.12noon.com/reschange.htm
Updating MS Deployment
{UPDATED since I got feedback from Mr. Ben Hunter.
I was wrong about all of this. If you are seeing an issue like the one below, then as far as I can tell something is corrupted in your Task Sequences. I deleted all mine (I had two, one prod and one test) and re-added them, and everything is working just fine now.}
So I ran into another little issue when I updated MS Deployment (was BDD). It would appear that MS internally changed some of the variable names. I am using a script that moves my machines to a temp OU and then back to the original OU after everything runs. Well, I turned my stable test task sequence (TS) into my production run, then branched off of that and started testing more little things (I can't even remember now what). When it got to my custom OU scripts it would fail. Since I've spent weeks messing with that script I've gotten fairly good at troubleshooting it (yes, that write-up is still going to be posted here at some point). As it turns out, MS changed a variable name from (I think) "DomainAdmin" to "UserName". It would appear that since I imported/converted my old TS it was using the old variable names, and so it kept working with my scripts; however, when I tried making a new TS it would fail.
Ahhh, the lovely little things that give me job security.