PowerFlex: GET_INFO - Support Bundle Collection Utility
NAME
get_info.sh - collect diagnostic information from a PowerFlex host and pack it into a support bundle
SYNOPSIS
get_info.sh [OPTIONS]
DESCRIPTION
get_info.sh is a diagnostic utility that collects debug information from a PowerFlex (formerly ScaleIO) host and archives it into a compressed bundle for analysis by support personnel.
The utility gathers data from multiple sources, including:
- PowerFlex component logs, configuration, and trace files
- MDM/SCLI query outputs and internal debug dumps
- PowerFlex component internal diagnostics
- Operating system configuration, logs, and runtime state
- Hardware inventory (storage controllers, network devices, NVMe, NVDIMM, etc.)
- Core dumps (existing and optionally generated on demand)
- Diagnostic data collector (diag_coll) statistics
The resulting bundle is a single compressed archive (tar/gz by default) that can be transferred to PowerFlex support for further analysis.
Only one instance of get_info.sh can run on a host at a given time. If there is not enough free space for its output, it will refuse to run (unless space checking is explicitly skipped).
OPTIONS
General Options
-a, --all
Collect all data. This is equivalent to specifying --mdm-repository, --collect-cores, --max-cores=2, --valgrind-cores, and --analyse-diag-coll.
-A, --analyse-diag-coll
Analyse diagnostic data collector (diag_coll) data.
-b[COMPONENTS], --collect-cores[=COMPONENTS]
Collect existing core dumps for the space-separated list of user-land COMPONENTS. Default (when COMPONENTS is omitted): all user-land components.
Note: there must be no space between -b and COMPONENTS. For the long form, separate with =.
Examples:
-b'mdm sds'
--collect-cores='mdm sds'
-d OUT_DIR, --output-dir=OUT_DIR
Store the resulting bundle under directory OUT_DIR. Default: <WORK_DIR>/scaleio-getinfo (see --work-dir).
-f, --skip-mdm-login
Skip the query for PowerFlex MDM login credentials. Useful when the user has already logged in manually.
-h, --help
Show the help message and exit. When combined with --tech, also display technician options.
-J, --xz
Use tar/xz format for the collected bundle instead of the default tar/gz. Ignored if the system's tar(1) does not support --use-compress-program or xz(1) is not found.
-k NUM, --max-cores=NUM
Collect up to NUM core files from each component. Default: all core files. Implies --collect-cores.
-l, --light
Generate a light bundle. Only the latest generation of numbered log files is collected, and component executables/libraries are not included when collecting cores. This option reduces supportability and its use is therefore discouraged.
-m NUM, --max-traces=NUM
Collect up to NUM PowerFlex trace files from each component. Default: all files.
-N, --skip-space-check
Skip free disk space verification before data collection.
-P PATH, --collect-path=PATH
Collect the additional path PATH. Only absolute paths are accepted. Accepts wildcards; wildcards should be quoted. This option can be specified multiple times to collect multiple paths.
-q, --quiet, --silent
Suppress messages on standard output.
-r, --mdm-repository
Collect MDM repository files.
-s, --skip-sdbg
Skip collection of SDBG (diagnostic debugger) output.
-S, --pause-core-generation
Pause core generation of PowerFlex components during data collection. Original configuration is restored after collection completes.
-w WORK_DIR, --work-dir=WORK_DIR
Use directory WORK_DIR for temporary files. Default: /tmp.
-x FILE, --output-file=FILE
Store the collected bundle as a file named FILE. The appropriate file name suffix (.tgz, .zip, etc.) is added automatically. If FILE is - (dash), write the bundle to standard output (implies --quiet). When the bundle is written to standard output, no bundle file is created on disk. Default: getInfoDump.
-z, --zip
Use zip format for the collected bundle instead of the default tar/gz. Ignored if zip(1) is not found on the system.
--mdm-port=PORT
Connect to the MDM using port PORT for SCLI commands. Default: scli default behavior.
--overwrite-output-file
Overwrite the output file if it already exists. When an output file or directory is explicitly specified (via -x or -d), the default behavior is to refuse to overwrite; this option overrides that.
--tech
Include technician options in the help message output.
MDM Login Options
The following options are passed to the SCLI --login command. Their behavior and default values are governed by SCLI.
-n, --use-nonsecure-communication
Connect to the MDM in non-secure mode.
-p PASSWORD, --password=PASSWORD
Use PASSWORD for PowerFlex MDM login. Default: scli default behavior.
-u USERNAME, --username=USERNAME
Use USERNAME for PowerFlex MDM login. Default: scli default behavior.
--ldap-authentication
Log in to PowerFlex MDM using LDAP-based authentication.
--management-system-ip=ADDRESS
Connect to SSO/M&O at ADDRESS for PowerFlex login. Default: scli default behavior.
--p12-password=PASSWORD
Encrypt the PowerFlex login PKCS#12 file using PASSWORD. Default: scli default behavior.
--p12-path=FILE
Store the PowerFlex login PKCS#12 file as FILE. Default: scli default behavior.
Technician Options
The following options are intended for use by support technicians and are shown in the help message only when --tech is specified.
-c[COMPONENTS], --generate-cores[=COMPONENTS]
Generate core files (via gcore(1)) for the running processes of the space-separated list of user-land COMPONENTS. Default: all user-land components. Implies --collect-executables. Requires gdb and gcore.
Note: there must be no space between -c and COMPONENTS. For the long form, separate with =.
Examples:
-c'mdm sds'
--generate-cores='mdm sds'
-C CORE_FILE, --reference-core-file=CORE_FILE
Collect product logs and cores relative to the last modification time (mtime) of CORE_FILE, instead of the execution start time. Implies --collect-cores.
-E REF_TIME, --event-time=REF_TIME
Collect product logs and cores relative to REF_TIME, instead of the execution start time. Accepts any format understood by date(1). Implies --collect-cores.
-g[COMPONENTS], --valgrind-cores[=COMPONENTS]
Collect Valgrind core dumps for the specified user-land COMPONENTS. Default: all user-land components. Implies --collect-executables.
Note: there must be no space between -g and COMPONENTS. For the long form, separate with =.
Examples:
-g'mdm sds'
--valgrind-cores='mdm sds'
-t MIN, --minutes-before-event=MIN
Collect product logs and cores generated up to MIN minutes before the reference time. Default: 15.
-T MIN, --minutes-after-event=MIN
Collect product logs and cores generated up to MIN minutes after the reference time. Default: 5.
-X[COMPONENTS], --collect-executables[=COMPONENTS]
Collect component executables and their shared libraries for the specified user-land COMPONENTS. Default: all user-land components.
Note: there must be no space between -X and COMPONENTS. For the long form, separate with =.
Examples:
-X'mdm sds'
--collect-executables='mdm sds'
--keep-work-dir
Retain the generated temporary work directory after bundle creation (normally cleaned up automatically).
BUNDLE STRUCTURE
The output bundle is a single compressed archive.
- The bundle top-level directory is the hostname of the collected system.
- General host command outputs go into a server/ subdirectory.
  The file name is <command> + <arguments> + suffix (.txt by default). Spaces are replaced with _; non-alphanumeric characters are stripped.
  Example: server/ip_-s_addr.txt – output of ip -s addr
- Product command outputs go into the component's subdirectory: mdm/ for scli, sdc/ for drv_cfg, etc.
  The command name (scli, drv_cfg, etc.) is stripped. The first meaningful argument becomes the file name. Files are assigned the relevant suffix, .txt by default.
  Examples:
  mdm/query_cluster.txt – output of scli --query_cluster
  mdm/tgt_dump.txt – output of scli --debug_action --tgt_dump
  sdc/query_mdms.txt – output of drv_cfg --query_mdms
  sds/sdbg.txt – output of SDBG dumpallscreens for SDS
- Product component files (as opposed to command outputs) go into <component>/cfg, <component>/logs, etc.
  They are copied from the component's directory with the prefix stripped.
  Examples:
  mdm/cfg/conf.txt – copy of /opt/emc/scaleio/mdm/cfg/conf.txt
  sds/logs/trc.0 – copy of /opt/emc/scaleio/sds/logs/trc.0
- Host file-system files are placed at their file-system path relative to the bundle root.
  Examples:
  etc/os-release – copy of /etc/os-release
  var/log/messages – copy of /var/log/messages
  proc/cpuinfo – copy of /proc/cpuinfo
- Diagnostic collector (diag_coll) files are copied with the /opt prefix stripped, preserving internal structure.
  Example: diag_coll/logs/sar.0 – copy of /opt/diag_coll/logs/sar.0
- Hidden files (dot-prefixed) are "unhidden" by removing the leading dot.
- The utility execution log, get_info_run.log, is placed directly under the <hostname>/ root.
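The server/ file-naming rule above can be sketched as a small shell helper. The function name cmd_to_filename is ours, not part of get_info.sh; note that hyphens and underscores are kept, as the ip_-s_addr.txt example shows:

```shell
# Hypothetical helper mirroring the naming rule; the real script may differ.
# Spaces become '_', then everything outside [A-Za-z0-9_-] is stripped.
cmd_to_filename() {
    printf '%s' "$*" | tr ' ' '_' | tr -cd 'A-Za-z0-9_-'
}

echo "server/$(cmd_to_filename ip -s addr).txt"   # server/ip_-s_addr.txt
```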
Bundle directory tree structure:
<hostname>/
|-- get_info_run.log Utility execution log
|-- server/ General command output directory
| |-- ip_-s_addr.txt
| |-- uptime.txt
| |-- uname_-a.txt
| |-- ps_-elF.txt
| |-- dmesg.txt
| +-- ... (one file per collected command)
|
|-- mdm/ PowerFlex component data (if installed)
| |-- cfg/ Configuration files (excl. PEM)
| |-- logs/ Trace and log files
| |-- rep/ Repository (if --mdm-repository)
| |-- query_all.txt SCLI query outputs
| |-- sdbg.txt SDBG screen dumps
| +-- ...
|-- sds/
|-- pds/
|-- dgwt/
|-- sdr/
|-- sdt/
|-- lia/
|-- sdc/
|-- gateway/
|
|-- diag_coll/ Diagnostic data collector (if installed)
| |-- logs/
| |-- cfg/
| +-- ...
|
|-- etc/ Host files
| |-- os-release
| |-- sysconfig/
| |-- network/
| +-- ...
|-- var/
| |-- log/
| | |-- messages
| | +-- ...
| +-- ...
|-- proc/
| |-- cpuinfo
| |-- meminfo
| +-- ...
|-- sys/
|-- ...
|
|-- scaleio-getinfo-extra/ Extra diagnostic data (if present)
+-- scaleio-getinfo-backup/ Backed-up configuration files (if any)
PRODUCT LOG AND CORE FILE FILTERING
The options described in this section control how product log files (also called trace files, e.g. trc.0, trc.1, exp.0) and core dump files are selected for inclusion in the collected bundle. They do so by defining a reference time, a time window around it, and count limits.
When no filtering options are specified, all product log files and (if core collection is enabled) all core dump files are collected. The filtering options progressively narrow this selection as described below.
Reference Time
A reference time can be set using either -E/--event-time or -C/--reference-core-file.
If neither --event-time nor --reference-core-file is given, no time window filtering is performed: the reference time defaults to the current time and is used only for proximity-based ordering when a count limit (-m or -k) is in effect (see Count Limits below).
If both -E and -C appear, the last one on the command line takes effect.
Time Window
When a reference time is set (using --event-time or --reference-core-file), a time window is established around it. The time window scope can be set using -t/--minutes-before-event and/or -T/--minutes-after-event, which default to 15 and 5 minutes, respectively. Only files whose content overlaps with this window are eligible for collection.
For example, -E "2020-03-20 14:30" -t 10 -T 3 collects files covering the period 14:20:00 through 14:33:00.
--minutes-before-event and --minutes-after-event are ignored when neither --event-time nor --reference-core-file is specified.
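Assuming GNU date(1) (which --event-time's "any format understood by date(1)" suggests), the boundaries of the example window above can be computed like this; the utility's internal implementation is not shown here:

```shell
# Derive the window for: -E "2020-03-20 14:30" -t 10 -T 3
REF='2020-03-20 14:30'
start=$(date -d "$REF 10 minutes ago" '+%H:%M:%S')   # window opens 10 min before
end=$(date -d "$REF 3 minutes" '+%H:%M:%S')          # window closes 3 min after
echo "$start .. $end"   # prints 14:20:00 .. 14:33:00
```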
Count Limits
A file count limit can be set using -m/--max-traces and -k/--max-cores, for log files and core files, respectively. The limit is measured per component.
When more files than NUM fall within the time window (or are available, if no window is active), the NUM files closest to the reference time are collected.
When a count limit is used without --event-time or --reference-core-file, all files are candidates (no time window) and the NUM most recent files are selected.
Filtering Logic
File filtering applies the time window first, then the count limit:
- Establish candidates. All product log files and/or core dump files for a component are enumerated.
- Derive the content period. A product log file's content represents a period: it is considered to start at the predecessor file's last modification time (mtime), or at the UNIX epoch when no predecessor exists, and it ends at the file's own mtime. A core dump file represents a point in time, at the file's mtime.
- Apply the time window (if -E or -C is specified). Files whose content falls entirely outside the window are discarded from selection. For product log files, if no file falls within the window, the single file closest to the window is retained so that the bundle is never empty for a component. For core dump files, no such fallback applies.
- Apply the count limit (if -m and/or -k is specified). Among the remaining files, at most NUM are selected, preferring those closest to the reference time. Files before and after the reference time compete equally for selection.
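The count-limit step can be illustrated with synthetic mtimes (epoch seconds). The awk/sort pipeline below is ours, not taken from get_info.sh; it ranks files by absolute distance from the reference time, so files on either side compete equally:

```shell
# Reference time 1000; keep the NUM=2 files closest to it.
# Input lines: "<file> <mtime>".
REF=1000
NUM=2
printf '%s\n' 'trc.0 1010' 'trc.1 940' 'trc.2 995' |
    awk -v ref="$REF" '{ d = $2 - ref; if (d < 0) d = -d; print d, $1 }' |
    sort -n | head -n "$NUM" | awk '{ print $2 }'
# prints trc.2 (distance 5) then trc.0 (distance 10); trc.1 (distance 60) loses
```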
AUTHENTICATION
The utility attempts to log in to the local MDM if a primary MDM process is detected listening on the expected port (default: 6611).
MDM login options are passed to the SCLI --login command and are processed by it.
If login fails, the utility terminates with an error.
When login is skipped, SCLI commands are still attempted (to support scenarios where the user has logged in manually beforehand). After 3 SCLI failures, a warning is displayed, and all further SCLI commands are skipped.
Login is skipped when:
- No primary MDM process is found on the local host.
- The --skip-mdm-login option is specified.
Login fails when:
- The MDM process owner is not in the authorized users list (default: root) and secure login is enabled.
- The SCLI --login command returns an error (e.g., wrong credentials).
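The three-failure cutoff described above can be sketched as a wrapper; the names scli_failures and run_scli are ours, not from get_info.sh:

```shell
# Count SCLI failures; after the third, skip all further SCLI commands.
scli_failures=0
run_scli() {
    if [ "$scli_failures" -ge 3 ]; then
        return 1    # cutoff reached: do not even attempt the command
    fi
    if ! scli "$@"; then
        scli_failures=$((scli_failures + 1))
        [ "$scli_failures" -eq 3 ] && echo 'warning: skipping further SCLI commands' >&2
        return 1
    fi
}
```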
DISK SPACE
Disk space requirements for temporary files and the resulting bundle can vary considerably.
The utility attempts to minimize temporary space usage; it is limited to command outputs and copies of collected virtual file system (/proc and /sys) files.
To minimize disk space usage on the PowerFlex host, the bundle can be streamed to a remote host by invoking the utility over SSH with --output-file=-. When streaming, the bundle is written directly to standard output (stdout); no bundle file is created on disk.
Before collecting data, the utility estimates the required disk space for both the temporary work directory and the output bundle.
If the estimated required space exceeds the available space on the relevant file system(s), the utility terminates with an error. This check can be bypassed with --skip-space-check.
The work directory and the output directory may reside on different file systems; each is checked independently.
The estimated space requirements are written to the utility's log file, get_info_run.log.
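A minimal pre-flight check in the same spirit, assuming POSIX df(1); the 512000 KiB requirement below is an arbitrary placeholder, not the utility's real estimate:

```shell
# Compare estimated need against free space on the work directory's file system.
required_kb=512000   # placeholder figure for illustration only
avail_kb=$(df -Pk /tmp | awk 'NR == 2 { print $4 }')   # column 4: available KiB
if [ "$avail_kb" -lt "$required_kb" ]; then
    echo "insufficient space in /tmp: ${avail_kb} KiB available" >&2
fi
```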
EXIT STATUS
0    Successful completion
1    Error (invalid arguments, insufficient space, login failure, another instance already running, bundle generation failure, signal caught, etc.)
FILES
<WORK_DIR>/get_info_run.log
    Execution log (also included in the bundle)
<WORK_DIR>/scaleio-getinfo-tmp/
    Temporary work directory (cleaned up on success)
/tmp/scaleio-getinfo/getInfoDump.tgz
    Default output bundle location
<WORK_DIR>/scaleio-getinfo-extra/
/tmp/scaleio-getinfo-extra/
    Optional extra diagnostic data directories
/tmp/scaleio-getinfo-backup/
    Temporary backups of modified configuration files (automatically created)
/opt/emc/scaleio/
    PowerFlex installation directory
ENVIRONMENT
Prerequisites
- The utility must be run as root (or a user with sufficient privileges to read component files, execute diagnostic commands, and access /proc, /sys, etc.).
- Standard utilities: tar, gzip, stat, find, awk, sed, getopt(1) (enhanced), nice.
- Optional: zip (for --zip), xz (for --xz), gdb/gcore (for --generate-cores).
Concurrency
Only one instance of get_info.sh may run at a time. The utility checks for an existing running instance via pidof(1) and terminates if one is found.
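A pidof(1)-based guard of the kind described might look like this; the real script's exact invocation may differ:

```shell
# -x matches scripts as well as binaries; -o $$ excludes our own PID.
if pidof -x -o $$ get_info.sh >/dev/null 2>&1; then
    echo 'another get_info.sh instance is already running' >&2
    exit 1
fi
```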
Signal Handling
The utility traps the INT and TERM signals (and the shell EXIT trap) during data collection. Upon receiving a signal, it:
- Restores any backed-up configuration files (e.g., core generation settings).
- Cleans up temporary directories.
- Exits with status 1.
The execution log is preserved and its path is printed to standard error.
EXAMPLES
Collect a standard support bundle:
get_info.sh
Stream a bundle over an SSH connection, without creating a bundle file on the remote PowerFlex host:
ssh <host> 'get_info.sh --output-file=-' > getInfoDump-<host>.tgz
Use a different work directory to avoid filling up /tmp:
get_info.sh --work-dir=/var/tmp
Include custom paths in the bundle:
get_info.sh --collect-path=/opt/custom/app/logs --collect-path='/var/log/app*'
Collect the latest core dump only for the SDS and MDM components:
get_info.sh --collect-cores='mdm sds' --max-cores=1
Collect data centered around a core file's modification time, with a custom time window:
get_info.sh --reference-core-file=/opt/emc/scaleio/sds/bin/core.1000 \
--minutes-before-event=10 \
--minutes-after-event=2