original author: Peter Charpentier, October 11, 2007
updated by: Drew Gilbert, April 27, 2008
updated by: Lubos Kosco, May 16, 2008
updated by: Lubos Kosco, February 16, 2009
updated by: Lubos Kosco, November 3, 2009
This Tech Note is intended for anyone who needs to open a Service Request for an Ops Center issue with the Sun Support Center.
Ops Center is a Solaris and Linux life cycle management tool that allows you to provision new systems, manage their updates and configuration changes, and eventually redeploy systems for new purposes. You can also inventory your systems and, on supported systems, update their firmware. You can subscribe to update and provisioning channels to manage Solaris, Red Hat, and SUSE systems. If you have an active Sun service plan, you can access the Solaris 10 Update Channel.
This technical note describes how to collect data that the Sun Support Center requires to debug problems with an Ops Center system. By collecting this data before you open a Service Request, you can substantially reduce the time needed to analyze and resolve a problem.
This technical note covers Ops Center on the Solaris Operating System and Linux platforms. The information applies to all types of environments, including test, pre-production, and production. To reduce performance impact, verbose debugging is used only when necessary. It is possible that the problem might disappear when you configure logging for debug mode. In most cases, the debug data described here is sufficient to analyze the problem.
If your problem does not conveniently fit into any of the specific categories that are presented here, supply the general information described in Collecting Basic Debug Data for Ops Center and clearly explain your problem.
If the information you provide is not sufficient to find the root cause of the problem, the Sun Support Center will ask for more details, as needed.
There are five basic steps for collecting debug data for an Ops Center problem. This document provides information about the first and last steps below.
Collect the basic problem and system information.
Collect the specific problem information (for example, installation logs, output, and other data).
Enable and disable debug modes for the various parts of the product.
Create a tar.gz (tar.z) file of all the information and upload it to the Sun Support Center (a minimal example of bundling additional files appears after this list).
Create a Service Request with the Sun Support Center.
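The collection script described later in this document normally creates the archive for you. If you need to bundle additional files by hand (for example, screenshots or a problem description), a minimal sketch is shown below; the file and directory names are placeholders only. The tar and gzip steps are kept separate because the Solaris tar command has no built-in gzip option.
# cd /var/tmp
# tar cf extra-debug-data.tar Description.txt screenshots
# gzip extra-debug-data.tar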
This section describes the kinds of debug data that you need to provide based on the kind of problem that you are experiencing. Note that the script detects not only which OS you are running, but also which parts of the product are installed on a specific machine, such as the agent and server software.
Download the sun-gdd-oc-VERSION-collect_noarch.zip file and unzip it to extract the Sun-GDD-OC.sh script. This gives you access to the actual script that is used to collect data and enable or restore debugging.
Note - Run all scripts in this procedure as root.
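A minimal sketch of preparing the script, assuming the zip file was downloaded to /var/tmp (the VERSION string in the file name differs per release):
# cd /var/tmp
# unzip sun-gdd-oc-VERSION-collect_noarch.zip
# chmod +x Sun-GDD-OC.sh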
To enable debug mode, run the Sun-GDD-OC.sh script with the debug option:
# ./Sun-GDD-OC.sh debug
To restore the original debug settings, run the Sun-GDD-OC.sh script with the restore option:
# ./Sun-GDD-OC.sh restore
To collect the debug data, run the collect option from the directory where the Sun-GDD-OC.sh script is installed:
# ./Sun-GDD-OC.sh collect
The script creates a dump file in /var/tmp (or a designated path) that should be uploaded to Sun after a Service Request is logged.
Note - The script automatically detects whether an agent and a server are installed, and copies all the files that are found.
# ./Sun-GDD-OC.sh -c 12345678 -n -s collect
In this case, the log data is collected without prompting for a location, using case number 12345678, and the resulting file is located at:
/var/tmp/12345678-sun-gdd-oc-`hostname`_`date "+%d-%m-%y--%H.%M.%S"`.tar.gz
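For example, on a host named myhost, a collection run on November 3, 2009 at 14:22:05 would produce a file such as /var/tmp/12345678-sun-gdd-oc-myhost_03-11-09--14.22.05.tar.gz (the host name and timestamp here are illustrative only).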
If an internet connection is available and either the STB sftransport is configured or curl is available, the script attempts to send the file to Sun supportfiles, saving you the manual upload to Sun.
# ./Sun-GDD-OC.sh -u ADMIN_USERNAME_OF_OC -n collect
With the -u option, the collection also gathers Ops Center domain data in textual form after the password is provided.
When the debug or restore options are used, the script automatically restarts the relevant processes. If there is a requirement to delay restarting, this behavior can be bypassed with the -r option, for example:
# ./Sun-GDD-OC.sh -r debug
You are then prompted to restart the agent processes manually. Note that you need to know how to restart the appropriate component.
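For illustration only, and assuming the component to restart is the default cacao instance (instance names and restart procedures vary by component and installation; if cacaoadm is not in your PATH, invoke it with its full installation path), a manual restart could look like:
# cacaoadm stop
# cacaoadm start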
When using the auto-collect option, you go through an automated cycle of debug, wait (for a key press or a certain amount of time), collect, and restore:
# ./Sun-GDD-OC.sh -t 600 auto-collect
The script enables debugging and increases the logger sizes, waits for 600 seconds, and then collects the logs and restores normal mode.
Note - The -t option is to be used only for auto-collect.
To get a help screen, run:
# ./Sun-GDD-OC.sh -h
The following debug information is required for the various software elements.
Note - The agent and server are both parts of the Update Channel.
The following information is added to the .uce.rc file. A copy of the original .uce.rc file is saved as .uce.rc.SunGDD, and debugging is then enabled in .uce.rc.
The following lines are added:
( all ) ( log.__file.default_logger-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
( all ) ( log.__file.uce_agent_app-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
The agent is restarted automatically.
To restore the previous debug level, copy back the .uce.rc.SunGDD file, if it exists.
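If you need to restore the agent configuration by hand, a minimal sketch is shown below; /path/to stands for the directory that holds your agent's .uce.rc file and must be adjusted for your installation. Running the restore option of the script performs the copy and the restart for you.
# cp /path/to/.uce.rc.SunGDD /path/to/.uce.rc
Then restart the agent.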
The following information is added to the .uce.rc file. A copy of the original .uce.rc file is saved as .uce.rc.SunGDD, and debugging is then enabled in .uce.rc.
The following lines are added:
( all ) ( log.__file.uce_server_cgi-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
( all ) ( log.__file.uce_server_cgi_publisher-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
( all ) ( log.__file.uce_server_scheduler-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
( all ) ( log.__file.default_logger-level, "DETAILED" { "SEVERE","ERROR","WARNING","INFO","DEBUG","FINE","DETAILED" } );
( all ) ( invisible.debug.__group1.debug_mode, true );
The server is restarted automatically.
The following debugging is enabled.
All appropriate instances of cacao (satellite-default, proxy, agent) are fully enabled for debug logging. These instances have an increased log file size and an increased count of rolling log files. Backups of the original configurations are kept in cacao.properties.SunGDD files.
The graphical Ops Center components are also fully enabled for debugging and their log files are increased in size. Backups of the configuration files are kept in logging.properties.SunGDD and log4j.xml.SunGDD.
The appropriate Ops Center components are restarted automatically. This can be avoided by using the -r option.
When you create a Service Request through the Sun Support Center, either online or by phone, provide the following information:
A clear problem description
Details of the state of the system, both before and after the problem started
Impact on end users
All recent software and hardware changes
Any actions already attempted
Whether the problem is reproducible; if it is, provide a detailed test case
Whether a preproduction or test environment is available
Name and location of the archive file containing the debug data
Upload your debug data archive file to the following locations:
Note - When opening a Service Request by phone with the Dispatch Team, provide a summary of the problem, then send the details in a text file named Description.txt. Be sure to include Description.txt in the archive along with the rest of your debug data.
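If the archive has already been created, one way to append Description.txt to it is shown below; the archive name is an example only, so substitute the file that your collect run actually produced. The archive is uncompressed first because tar cannot append to a compressed archive.
# gunzip /var/tmp/12345678-sun-gdd-oc-myhost_03-11-09--14.22.05.tar.gz
# tar rf /var/tmp/12345678-sun-gdd-oc-myhost_03-11-09--14.22.05.tar Description.txt
# gzip /var/tmp/12345678-sun-gdd-oc-myhost_03-11-09--14.22.05.tar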
Use the following email aliases to report problems with this document and its associated scripts:
To provide feedback: mailto:users@sun-gdd-oc.kenai.com
To report problems: mailto:issues@sun-gdd-oc.kenai.com
For more information about Ops Center, go to the Ops Center Information Exchange.
To get tips and hints on Ops Center, visit the Ops Center blog.
Contributors to these blogs include members of the Ops Center Field Enablement team.
The goal of these blogs is to share information with customers who either have
already implemented or will implement these products in the future. Those blogs also
provide important information around training and other key enablement activities.