
Dell OMSA 7.3 and DTK 4.3 for Ubuntu and Debian


Dell OpenManage System Administrator 7.3 for Ubuntu and Debian

Dell OpenManage System Administrator (OMSA) 7.3 for Ubuntu and Debian is now published. When we recently published OMSA 7.2, we switched to a new apt repository format to work better with both Ubuntu and Debian and to allow packages for multiple OS releases in the same repository. OMSA 7.3 continues that. Furthermore, OMSA 7.3 is now built on both Ubuntu 12.04 and Debian Wheezy to increase compatibility with Debian. All packages that are in the Ubuntu distribution but not in Debian are rebuilt for Debian and included in the OMSA Wheezy repository for convenience.

(Please note that OMSA 7.2 and above are not built for Ubuntu 10.04 and Debian Squeeze. The last OMSA release tested with Ubuntu 10.04 and Debian Squeeze is OMSA 7.1, which is also provided in the new apt repository for convenience.)

Additionally, OMSA's Integrated Tunnel Provider (srvadmin-itunnel) is now built for Ubuntu and Debian. This brings Ubuntu and Debian closer to parity with RHEL and SLES in terms of the System Administrator functionality in OMSA.

Dell Deployment Toolkit 4.3 for Ubuntu and Debian

Also included in this release is version 4.3 of the Dell Deployment Toolkit (DTK) for Ubuntu and Debian. DTK is lighter-weight than OMSA and is meant to assist with system deployment. The packages are named syscfg, raidcfg, and dtk-scripts. syscfg is a tool to configure server BIOS, BMC/iDRAC settings, and DTK state settings, and to do PCI device detection. raidcfg, as the name suggests, is a tool to configure RAID on Dell PowerEdge servers. dtk-scripts contains sample DTK scripts and tools to build a bootable Dell utility partition for DOS-based firmware updates.

Where to get it

More information on where to download these packages is at http://linux.dell.com/repo/community/ubuntu/.
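For example, on Ubuntu 12.04 ("precise") the setup looks roughly like this; treat it as a sketch, since the authoritative sources.list line, component name, and signing-key instructions live on the page above:

  $ echo 'deb http://linux.dell.com/repo/community/ubuntu precise openmanage' | sudo tee /etc/apt/sources.list.d/linux.dell.com.sources.list
  $ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <repository signing key ID from the page above>
  $ sudo apt-get update
  $ sudo apt-get install srvadmin-all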

Getting help

Please join us on the linux-poweredge@lists.us.dell.com mailing list for support and feedback. You can sign up at <https://lists.us.dell.com/mailman/listinfo/linux-poweredge>.


Auto Dedicated NIC feature in iDRAC7


By Kareem Fazal and Virender Sharma of the Dell iDRAC team 

The new 12th generation Dell PowerEdge servers offer the Auto Dedicated NIC feature in iDRAC7 version 1.30.30 (links can be found on the iDRAC page in the Release Summary section), which helps customers automatically configure the iDRAC7 network connection.

Many customers route iDRAC management traffic via the shared LOM to save on ports and limit cables.  Dell offers additional flexibility in this case via the Auto Dedicated NIC feature, as described in this paper.  Now, customers can connect a crash cart directly to the dedicated NIC port, and the DRAC will automatically switch from shared mode to dedicated mode, and then back again once the cable has been removed.

In iDRAC versions below 1.30.30, the iDRAC network connection could be either the dedicated NIC port or a shared LOM port. To use the dedicated port, it was necessary to change the setting via the iDRAC web interface or command line, as well as physically connect a cable to the server.

With iDRAC versions 1.30.30 and above, the Auto Dedicated NIC feature is available as enhanced functionality and does not change the existing behavior of manual NIC selection. User intervention to change the NIC setting via the iDRAC7 web interface or command line is not needed, as Auto Dedicated NIC switches to the correct network automatically.

Requirements:

  • The feature is offered on PowerEdge rack and tower servers only (not on blades)
  • An iDRAC7 Enterprise license is required to enable the feature
  • For PowerEdge rack and tower servers in the 500 series and below (R520, R420, T420, R320, T320), an add-in card is required to provide the dedicated NIC port.
    • If the iDRAC7 Enterprise license is ordered at point of sale, the add-in card comes with the server.
    • If the iDRAC7 Enterprise license is ordered after point of sale, the add-in card will need to be ordered separately.

This feature is disabled by default. It can be enabled using any of the following interfaces (a RACADM sketch follows the list):

  • iDRAC Web Interface
  • RACADM
  • WSMAN
  • HII
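For instance, enabling the feature with RACADM amounts to setting a single iDRAC attribute. This is a hedged sketch: the attribute name (iDRAC.NIC.AutoDetect) is our reading of the iDRAC7 attribute registry, so verify it against the RACADM reference for your firmware version before relying on it:

  racadm set iDRAC.NIC.AutoDetect Enabled
  racadm get iDRAC.NIC.AutoDetect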

The following matrix describes the behavior when Auto Dedicated NIC is on or off, for each NIC selection and failover mode (failover does not apply when the dedicated NIC is selected, so those cells read "Not Possible"):

  • Auto Dedicated NIC = On, dedicated port up:
    • NIC Selection = Dedicated: Failover: Not Possible; No Failover: Dedicated
    • NIC Selection = Shared: Failover: Dedicated; No Failover: Dedicated
  • Auto Dedicated NIC = On, dedicated port down:
    • NIC Selection = Dedicated: Failover: Not Possible; No Failover: Dedicated
    • NIC Selection = Shared: Failover: Selected or failover NIC; No Failover: Selected NIC
  • Auto Dedicated NIC = Off, dedicated port up:
    • NIC Selection = Dedicated: Failover: Not Possible; No Failover: Dedicated
    • NIC Selection = Shared: Failover: Selected or failover NIC; No Failover: Selected NIC
  • Auto Dedicated NIC = Off, dedicated port down:
    • NIC Selection = Dedicated: Failover: Not Possible; No Failover: Dedicated
    • NIC Selection = Shared: Failover: Selected or failover NIC; No Failover: Selected NIC

By using this feature, customers have the flexibility to route server management traffic as needed, quickly and effortlessly. More information on Auto Dedicated NIC can be found in this paper on Dell TechCenter. Additional information on iDRAC and Lifecycle Controller: Click here.

Autocomplete and Command Traversal Feature on iDRAC7 RACADM


Starting with the 1.30.30 firmware release of iDRAC7 in Q4 of 2012, Dell introduced two features that enable you to run RACADM (Remote Access Controller Admin) commands without having to remember the exact syntax of complex commands. These features are Autocomplete and Command Traversal, which are supported in iDRAC7 firmware RACADM (SSH, Telnet, and serial).

Autocomplete and command traversal are available with the iDRAC Enterprise and Express licenses only. The basic management license does not support these new features; with that license, user privileges are limited to the RACADM commands supported in the admin shell.

RACADM Autocomplete

With Autocomplete, there is no need to memorize the different RACADM commands or their sometimes complicated syntax. When you type a few letters of a command, pressing TAB will either autocomplete the command, if only one command starts with the letters typed, or list all commands starting with those letters. You can also use this feature to complete the different options supported by a command.

This feature is applicable in the RACADM shell only. When you log in to firmware RACADM, you start in the "admin" shell; you first have to type the command "racadm" to change to the "racadm" shell.

Command Traversal

Command traversal is part of the Autocomplete feature and allows you to traverse to a different RACADM group and perform get and set operations. This feature works only in the racadm shell.

In the racadm shell, typing "cd" and pressing TAB lists the available groups. To traverse to one of the groups, type a few letters of the group name and press TAB; the group name will autocomplete, and you can traverse into the group by pressing Enter.

After traversing to a group, you can get a list of supported attributes by running the get command. You can also use the cd command to traverse further into subgroups. To get the current setting of a specific attribute, type the get command followed by the attribute name. To set an attribute, run "set" followed by the attribute name and the attribute value. All of these operations are also autocompleted by pressing the TAB key.

To traverse one level back to the previous group from a subgroup, type "cd ..". To exit the racadm shell and return to the admin prompt, type "exit".
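Putting autocomplete and traversal together, an illustrative firmware RACADM session might look like the following; the group and attribute shown (iDRAC.NIC and Selection) are examples, and the prompts are paraphrased:

  /admin1-> racadm
  racadm>> cd iDRAC.NIC            (typed "cd iDRAC.N", then TAB)
  racadm/iDRAC/NIC>> get Selection
  Selection=Dedicated
  racadm/iDRAC/NIC>> cd ..
  racadm>> exit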

 

For more information on RACADM, refer to the RACADM command line reference documentation on Dell TechCenter.

 

I am receiving a Java warning message when launching Virtual Console or Virtual Media with DRAC


This blog post has been written by Dave Collier and Doug Iler from the Dell iDRAC team

Oracle recently released an update to Java 7 that may impact any DRAC5, iDRAC6, or iDRAC7 Enterprise features that use Java, such as Virtual Console and Virtual Media. Users who have upgraded their Oracle Java installation to 7u45 or higher (possibly by allowing auto-updates to occur) may receive an additional warning message when launching these features.

  

This message is a notification only. If this message is received, click "Run" to continue launching the Virtual Console or Virtual Media viewer (optionally, with the "Do not show this again…" box checked).

Alternatively, restoring the previous Java version will also prevent the message from being displayed. Please refer to the Oracle Java documentation on how to uninstall Java. After uninstalling, re-install Java 7 update 21 (or your preferred version).

Dell is working to provide a release of iDRAC firmware that resolves this issue and is compatible with this and other upcoming Java 7 updates.


Learn more about iDRAC7 at http://www.delltechcenter.com/iDRAC

Dell Server DRAC Card Soft Reset with Racadm


The following blog post is from Didier Van Hoye, a Technical Architect, Dell TechCenter Rockstar and avid blogger.

Sometimes a DRAC goes BOINK

Sometimes a DRAC (Dell Remote Access Card) can give you issues; often it's a lingering process or some other hiccup that causes this. You can try a reboot, but that doesn't always fix the issue. You can go into the BIOS and cancel any running System Services. A "confused" DRAC card can also be fixed by shutting down the server and cutting power for 5 to 10 minutes. That's good to know as a last resort, but often not feasible outside a maintenance window when you're on premises.

You can also try a local or remote reset of the DRAC card via OpenManage (OMSA) or racadm. See RACADM Command Line Interface for DRAC for more information on how and when to use this tool. racadm can be used for a lot of remote configuration and administration, and one of its functions is a "soft reset", basically a power cycle (reboot) of the DRAC card itself. Don't worry, your server stays up.

Local: racadm racreset soft

Remote: racadm -r <ip address> -u <username> -p <password> racreset soft

Real life example

I was doing routine maintenance on 4 Hyper-V clusters, and as part of that, DUPs (Dell Update Packages) were being deployed to upgrade some firmware. This can be automated nicely via Cluster Aware Updating, and the logging option will help you pinpoint issues. See http://workinghardinit.wordpress.com/2013/01/09/logging-cluster-aware-updating-hotfix-plug-in-installations-to-a-file-share/ for more information on this.

That is how we found that the DRAC upgrade was not succeeding on two nodes.

On one, it was because the DUP could not access the Virtual USB Device:

Software application name: iDRAC6
   Package version: 1.95
   Installed version: 1.92

Executing update…

Device does not impact TPM measurements.

Device: iDRAC6, Application: iDRAC6
  Failed to access Virtual USB Device

==================> Update Result <==================

Update was not applied

================================================

Exit code = 1 (Failure)

and on the other, it was because some other DRAC process was lingering:

 iDRAC is currently unable to process this request because of another task.
  Please attempt one or more of the following steps to cancel the pending iDRAC task:
  1) Wait 30 minutes and retry your request.
  2) Reboot the system; Press F10; select ‘Exit and Reboot’ from Unified Server Configurator, and retry your request.
  3) Reboot the system; Press Ctrl-E; select ‘System Services’. Then change ‘Cancel System Services’ to YES, which will close the pending task;
      Then press Enter at the warning message. Press ESC twice and select ‘Save Changes and Exit’ and retry your request.

==================> Update Result<==================

Update was not applied

================================================
Exit code = 1 (Failure)

They give some nice suggestions, but the racreset is another nice one to have in your toolkit. It's fast and effective.

Run racadm racreset soft


Wait a couple of minutes, then rerun the DUP or the items in SUU that failed. With some luck, they will now succeed.


Setting iDRAC OS Information with IPMI on Ubuntu Server



A few weeks ago the Dell Linux Engineering team published a TechCenter article on how to set and retrieve Linux operating system information in iDRAC using the latest ipmitool utility in Fedora 18 and later releases.

Kent Baxley, Canonical Field Engineer, has ported this functionality to Ubuntu Server 12.04 LTS (and higher) so that customers running Ubuntu Server can take advantage of this useful systems management feature.

To read the easy-to-follow set of instructions on Ubuntu Server, click here.
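As a quick taste of the feature (a sketch assuming an ipmitool build that includes the mc setsysinfo/getsysinfo commands described in the articles above), setting and reading the OS name in-band looks roughly like:

  $ sudo ipmitool mc setsysinfo os_name "Ubuntu 12.04 LTS"
  $ sudo ipmitool mc getsysinfo os_name
  Ubuntu 12.04 LTS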

Supporting 64-bit Dell Update Package (DUP) Using catalog


This blog post has been written by Vinod PS and Rohitkumar Arehalli

Note: To use this feature, you must update the iDRAC firmware to 1.51.51 or later, and Lifecycle Controller to 1.3.0.850 or later.

Lifecycle Controller now supports updating 64-bit DUPs via catalog update. The latest catalog file carries both 32-bit and 64-bit DUPs. If both a 32-bit and a 64-bit DUP are available in a catalog, preference is given to the 64-bit DUP for the firmware update. The 32-bit DUP is used only when a 64-bit DUP is not available in the catalog.


Additional Information:

More information on iDRAC

 

Dell OMSA 7.4, DTK 4.4, and iSM 1.0 for Ubuntu and Debian


Dell OpenManage System Administrator 7.4 for Ubuntu and Debian

Dell OpenManage System Administrator (OMSA) 7.4 for Ubuntu and Debian is now published. OMSA 7.4 continues to be built on both Ubuntu 12.04 and Debian Wheezy to increase compatibility with Debian. All package dependencies that are in the Ubuntu distribution but not in Debian are rebuilt for Debian and included in the OMSA Wheezy repository for convenience. On Debian, you must use the Debian Wheezy packages and not the Ubuntu 12.04 packages because of libc differences between the two distribution releases.

(Please note that OMSA 7.2 and above are not built for Ubuntu 10.04 and Debian Squeeze. The last OMSA release tested with Ubuntu 10.04 and Debian Squeeze is OMSA 7.1, which is also provided in the new apt repository for convenience.)

Here is a list of the major changes made specifically for Ubuntu/Debian with this release:

  • Updated the debhelper version used for building to version 9.
  • This community-supported repository no longer bundles the Oracle Java runtime but instead utilizes the OpenJDK 7 JRE available in-distribution. This helps those running the OMSA web interface to get JRE security updates faster. (If you require OpenJDK 6 JRE on your server, I recommend that you run the OMSA web server in a chroot or other type of container.)
  • Fixed a buffer overflow in vmcli.
  • We're now providing debug packages for the Ubuntu build (not Debian for now).
  • Because of the limited use cases for srvadmin-itunnelprovider, srvadmin-standardagent's dependency on it has been changed from "Depends" to "Suggests".
  • Provided basic packaging of srvadmin-cm, though this one package has not been thoroughly tested, as in-band DUPs are not yet supported on Ubuntu or Debian.

Additionally, we have a new process in place for validating on Ubuntu with the help of Canonical. Our on-site engineers from Canonical tested this release on both Ubuntu 12.04 LTS and beta builds of Ubuntu 14.04 LTS. My thanks go especially to Kent Baxley of Canonical for his continued help in testing OMSA. :-)

Dell Deployment Toolkit 4.4 for Ubuntu and Debian

Also included in this release is version 4.4 of the Dell Deployment Toolkit (DTK) for Ubuntu and Debian. DTK is lighter-weight than OMSA and is meant to assist with system deployment. The packages are named syscfg, raidcfg, and dtk-scripts. syscfg is a tool to configure server BIOS, BMC/iDRAC settings, DTK state settings, and to do PCI device detection. raidcfg, as the name suggests, is a tool to configure RAID on Dell PowerEdge servers. dtk-scripts contains sample DTK scripts and tools to build a bootable Dell utility partition for DOS-based firmware updates. Here are some changes specifically made for this release:

  • Fixed a bug, missed in previous releases, affecting the functionality of raidcfg.
  • Bundled an Ubuntu-specific DTK script made and tested by our on-site engineer from Canonical.

Note that DTK is not meant to be used on a system with OMSA installed. In particular, raidcfg will not work properly when OMSA is installed.

Dell iDRAC Service Module (iSM) for Ubuntu and Debian

This release includes the iDRAC Service Module (iSM). More information about iSM can be found at: http://www.dell.com/support/home/us/en/19/product-support/product/dell-idrac-service-module-1.0/manuals?c=us

    Where to get it

    More information on where to download these packages is at http://linux.dell.com/repo/community/ubuntu/.

    Getting help

    Please join us on the linux-poweredge@lists.us.dell.com mailing list for support and feedback. You can sign up at <https://lists.us.dell.com/mailman/listinfo/linux-poweredge>.


    RHEL 7 with Dell iDRAC7 Virtual Console on 12G PowerEdge servers


    This blog was written by Charles Rose, Linux Engineering

    When using the virtual console with Dell iDRAC on PowerEdge 12G servers and RHEL 7, you could experience some problems with keyboard/mouse functionality. For instance, during OS install, you can view the installer's language/country selection screen but be unable to make a selection. Similarly, post-install, the keyboard/mouse might appear to work for a few seconds and then stop.

    There was a problem with auto-suspend of USB devices that has been addressed in a newer iDRAC7 firmware release. Firmware versions 1.51.51 and later contain a fix for this behavior. With the newer firmware, the keyboard/mouse should work as expected and you should be able to perform a GUI install of RHEL 7.

    In-Band retrieval of Dell iDRAC IP Address on Dell PowerEdge Servers using Windows PowerShell


    The integrated Dell™ Remote Access Controller with Lifecycle Controller helps to manage, monitor, and deploy Dell servers. iDRAC provides remote management capability: an out-of-band mechanism to monitor, update, and troubleshoot the servers. The latest version is iDRAC7, which is present in Dell™ 12th generation servers.

    IPMI is a standardized computer interface that is used for hardware management. IPMI supports both in-band and out-of-band management. The focus of this post is the in-band management of Dell PowerEdge servers using Dell iDRAC7 and Windows Server 2012 R2.

    Microsoft Windows PowerShell includes CIM cmdlets, which make remote management operations easy. PowerShell 4.0, which is part of Windows Server 2012 R2 / Windows 8.1, also supports the CIM cmdlets first introduced in PowerShell 3.0. The integration of PowerShell in Windows Server 2012 / 2012 R2, together with iDRAC7, provides a rich set of remote management capabilities.

    Consider a scenario where you want to retrieve the iDRAC IP address. To manage the system remotely, you can easily retrieve the iDRAC IP address in-band from the OS. The script uses the IPMI functionality that comes native within the OS.

    Additional Resources:

    Using Microsoft Windows PowerShell CIM Cmdlets with Dell iDRAC

    For more related articles visit Managing Dell PowerEdge VRTX using Windows PowerShell

    Dell & Redfish, What You Need to Know


    Author: Jon Hass

    As revealed in a joint press release yesterday, Dell is participating in a coalition including Emerson, Hewlett Packard, and Intel whose purpose is to create a new industry standard for the management of data center hardware. The initial Redfish specification, which specifically targets server management, will be publicly available once published by an industry standards body such as the Distributed Management Task Force (DMTF). But today, I want to talk a little about this standard and what it might mean for Dell customers.

    Dell has a long history of supporting industry standards, from IPMI, which was introduced in 1998, to more recent standards such as SMASH. Five years ago, Dell introduced its web services interface (WSMAN) and has since evolved it into one of the world’s most sophisticated and capable server management APIs.  Backed by this kind of experience, Dell is a critical partner in the Redfish project, and our participation continues this long legacy of supporting industry standards.

    As with previous industry standards, Dell’s support is a boon to our customers, allowing them to limit the number of management processes needed to manage a multitude of servers. Redfish is no different in this respect, but it brings much more to the table. With scale-out data centers becoming more and more common, a standard that can comprehend the vicissitudes of today’s complex environments is needed. 

    Leveraging existing web technologies such as JSON and HTTPS while embracing RESTful design principles and a lightweight data model, Redfish is built to meet the challenges of today's large-scale data centers, which primarily manage to the lowest common denominator: IPMI. Though IPMI has served the industry well, it was designed for an earlier era of computing and falls short in describing today's complex, and increasingly disaggregated, computer systems.
     
    Several computer security researchers, such as Dan Farmer, have pointed to vulnerabilities with many implementations of IPMI. For this reason, Redfish is designed from the ground up with security best practices in mind.

    Another advantage of Redfish is that it is opaque: unlike IPMI, it does not prescribe an implementation to server vendors like Dell. Instead, it is limited to the API only. Furthermore, the protocol and data model can be revised independently, which will reduce the complexity of implementing future revisions.

    So, what can Redfish do? Though Redfish will evolve, the initial specification defines a set of management capabilities similar to those available in IPMI (a hypothetical request sketch follows the list):

    Retrieve Telemetry
    - Basic server identification and asset information
    - Health state
    - Temperature sensors and fans
    - Power consumption and thresholds

    Discovery
    - Service endpoint (network-based discovery)
    - System topology (rack, chassis, server, node)

    Basic I/O Infrastructure Data
    - Host NIC MAC addresses for LOM devices
    - Simple hard drive status / fault reporting

    Security
    - Session-based, leveraging HTTPS

    Common Management Actions
    - Reboot / power cycle
    - Change boot order
    - Configure BMC network settings
    - Manage user accounts

    Access and Notification
    - Serial console access via SSH
    - Alert / event notification
    - Event log access
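    To make the RESTful model concrete, here is a purely hypothetical sketch of what a Redfish request could look like; the endpoint path, resource name, and JSON properties are illustrative assumptions, since the initial specification has not yet been published:

    $ curl -k -u root:password https://bmc.example.com/redfish/v1/Systems/1
    {
        "Name": "WebServer-01",
        "PowerState": "On",
        "Status": { "Health": "OK" }
    }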

    Since the joint announcement of Redfish, several questions have been raised about what this means for the future of IPMI and Dell's WSMAN interfaces. To be clear, Redfish, once it is broadly implemented, will be ideal for large heterogeneous data centers, but at the moment it offers only a fraction of the capability of Dell's WSMAN interface.

    For this reason, Dell still recommends WSMAN as its primary application programming interface and will continue to invest in it for the foreseeable future. Until Redfish is implemented across a large share of industry server hardware, IPMI will remain a critical standard in the data center; until that happens, Dell has no plans to drop support for IPMI in its server products.

    To learn more about Redfish, please visit www.redfishspecification.org.


    Curious about VMware Virtual Volumes?


    Written by David Glynn:

    For several years we've been talking about VMware Virtual Volumes, or VVols for short. And we've been saying that it is coming, coming soon. Well, we are still saying that, and it is coming soon, I promise, but I can't tell you when. However, what I can do for you today is let you get your hands dirty with VVols running on EqualLogic storage. Pretty cool, right?

    For VMworld 2014, we worked with the VMware Hands-on Labs team to create a lab from which VMware customers, not just Dell Storage customers, could experience VVols first hand. Folks got to see how day-to-day tasks like snapshots and cloning are accelerated by the Virtual Volume integration with Dell EqualLogic, while the workflows involved remain similar (because as cool as tech can be, no one likes it when things change for no good reason). They also got to perform a number of the VVols configuration steps, and then perform the same configuration tasks with fewer steps and from a single interface using the Dell EqualLogic Virtual Storage Manager vSphere plugin.

    However, the fun didn’t stop at VMworld! Since then we’ve made a few performance tweaks to make things run faster, and now we are proud to announce that the lab is available and accessible around the clock at http://labs.hol.vmware.com/HOL/#lab/1513

    Oh, and to brag a little bit, we were the only one of VMware’s many Storage Partners to do this. Just another example of Dell and VMware working hand-in-hand.

    For those of you asking "What are VVols, and why should I care?", let me summarize it for you in one word: granularity. Literally, that is hundreds of blog posts and slide decks summarized into one word. And what do I mean by it? VVols enable block storage to be VM-aware, and as your SAN, if I am aware of you and understand you, I can do things better with you. But better in what way?

    Granularity. There is that word again. Today, when we use SAN snapshots to protect virtual machines, we do it at the datastore/volume/LUN level (datastores, volumes, and LUNs are all the same thing; it just depends on who in the datacenter you are talking to). This means that the virtual machine you want to protect, that business-critical SQL database server, brings some baggage with it: that not-so-important but chatty file & print server. This works, but it adds overhead and inefficiency.

    With VVols, you pick the individual virtual machine you want to protect, and nothing else. An individual virtual machine can have a very particular protection schedule because of business need or SLA, or simply because "Hey, we can do that? Cool!" Oh, and did I mention that you'll do this directly from the vSphere Web Client? And that it will be faster? As for the file & print server, I'll still be snapshotting that, but just once a month on the second Tuesday. Yup, I created a snapshot schedule template called Patch Tuesday.

    One more thing, because I know some of you don't find data protection exciting. Can I interest you in the health benefits of VVols? With VVols your virtual machine is now a series of volumes on the array, and as a VM-aware array we know which volumes go together to make up a particular virtual machine. This means that when you want a copy of a virtual machine and VMware asks us to do the work, all we have to do is clone a few volumes. And cloning volumes is old hat for an intelligent virtualized array architecture like EqualLogic.

    So how is this a health benefit? How long does it take to clone a virtual machine from template? Personally I’ve no idea, because I just go get another cup of coffee, and it is done when I get back. But with VVols it is done before I leave the cube. Sometimes I still go and get coffee.

    Still have a thirst for more information about VVols? VMware has already published over 400 sessions from last month’s VMworld 2014 in San Francisco. These can all be accessed from this page: http://www.vmworld.com/community/sessions/2014/

    I recommend the “Virtual Volumes Technical Deep Dive” presented by VMware and “VMware VVOL Technical Preview with Dell Storage” co-presented by Dell and VMware. Between these two sessions, you’ll have a firm understanding of what VVols will mean for you and your datacenter.

    Dell PowerEdge 13th Generation Servers certified with Ubuntu Server LTS 14.04


    The Dell Linux Engineering team is pleased to announce the certification of Dell PowerEdge 13th Generation servers R730, R730xd, R630 & T630 with Ubuntu Server 14.04 LTS Edition. Customers looking to deploy Ubuntu Server can choose PowerEdge 13th Generation servers and 14.04 LTS with confidence knowing that Ubuntu 14.04 LTS is supported by Canonical for 5 years. 

    • For a quick glance at the Ubuntu Support Matrix for PowerEdge servers, click here.
    • For specific server certification details, visit Canonical’s Hardware Compatibility List.

    Deploying to the cloud

    Ubuntu 14.04 LTS includes Juju, a set of tools to easily deploy and orchestrate services in the cloud. Juju can be used together with MAAS (Metal-as-a-Service) to deploy services on bare metal; a brief illustrative sketch follows. Both MAAS and Juju have been tested on all certified Dell PowerEdge 13th Generation servers.
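    As a brief illustration (a sketch that assumes a MAAS environment is already configured as Juju's provider; the charm names are just examples), deploying and connecting services looks like:

    $ juju bootstrap                      # provisions the first node through MAAS
    $ juju deploy mysql                   # each charm lands on its own bare-metal node
    $ juju deploy wordpress
    $ juju add-relation wordpress mysql   # connect the blog to its database
    $ juju expose wordpress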

    Support 

    Ubuntu support is available from Canonical through the Ubuntu Advantage program. Best-effort support from Dell is available with your Dell ProSupport contract. For questions and general discussion, join our Linux-PowerEdge mailing list; we welcome your participation and feedback.

    USB3 Kernel Debugging with Dell PowerEdge 13G Servers


    This blog was originally written by Thomas Cantwell, Deepak Kumar and Gobind Vijayakumar from DELL OS Engineering Team. 

    Introduction -

    Dell PowerEdge 13G servers are the first generation of Dell servers to have USB 3.0 ports across the entire portfolio. This provides a significant improvement in data transfer speed over USB 2.0. In addition, it offers an alternative method for debugging the Microsoft Windows OS, starting with Windows 8/Windows Server 2012.

    Background –

    Microsoft Windows has been able to kernel debug over USB 2.0 since Windows 7/Windows Server 2008 R2, though there were some significant limitations to debugging over USB 2.0:

    1) The first port had to be used (with a few exceptions; see http://msdn.microsoft.com/en-us/library/windows/hardware/ff556869(v=vs.85).aspx ).
    2) Only a single port could be set for debugging.
    3) You had to use a special hardware device on the USB connection (http://www.semiconductorstore.com/cart/pc/viewPrd.asp?idproduct=12083).
    4) It was not usable in all instances; in some cases, a reboot with the device attached would hang the system during BIOS POST. The device would have to be removed to finish the reboot and could be reattached when the OS started booting. This precluded debugging of the boot path.

    USB 3.0 was designed from the ground up, by both Intel and Microsoft, to support Windows OS debugging; it has much higher throughput and is not limited to a single port for debugging.

    Hardware Support - 

    As previously stated, Dell PowerEdge 13G servers will now support not only USB 3.0 (also known as SuperSpeed USB), but also USB 3.0 kernel debugging. 

    BIOS settings -

    Enter the BIOS and enable USB 3.0; it's under the Integrated Devices category (by default, it is set to Disabled).

    • IMPORTANT!  ONLY enable USB 3.0 if the operating system has support!  Windows 8/Windows Server 2012 and later have this capability.  If you enable this and the OS does NOT have support, you will lose USB keyboard/mouse support when the OS boots.

    Ports –

    • USB 3.0 ports on Dell PowerEdge 13G servers can be used for Windows debugging (with USB 3.0 enabled and the proper OS support). 
    • Some systems, such as the Dell PowerEdge T630, also have a front USB 3.0 port.  The Dell PowerEdge R630/730/730XD have only rear USB 3.0 ports.  The Dell PowerEdge M630 blade also has one USB 3.0 front port.

    Driver/OS support -

    USB 3.0 drivers are native in Windows Server 2012 and Windows Server 2012 R2. There is no support for USB 3.0 debugging in any prior OS versions.

    USB3 Debugging Prerequisites –

    • A host system with an xHCI (USB 3.0) host controller. The USB 3.0 ports on the host system do NOT need USB 3.0 debug support; only the target system must have that.
    • A target system with an xHCI (USB 3.0) host controller that supports debugging.
    • A USB 3.0 (A-A) crossover cable. You can get the cable from many vendors; one option: http://www.datapro.net/products/usb-3-0-super-speed-a-a-debugging-cable.html
      • Note: The USB 3.0 specification states that pin 1 (VBUS), 2 (D-), and 3 (D+) are not connected. This means that the cable is NOT backwards compatible with USB 2.0 devices.

    Steps to Set Up the Debugging Environment -

    1. Make sure USB 3.0 is enabled in the BIOS on both host and target. All Dell 13G servers support USB 3.0 debugging.
    2. Verify the OS on the host and target systems. The OS must be Windows 8/Server 2012 or Windows 8.1/Server 2012 R2 on both; for the debug host, a client OS is perfectly fine for debugging a server OS on the debug target. It is strongly recommended to use Windows 8.1 and/or Windows Server 2012 R2 on the host to ensure you can get the latest supported Windows debugging software; you can then debug all current and older OS versions (that support USB 3.0 debugging) from the host: http://msdn.microsoft.com/en-us/library/windows/hardware/ff551063(v=vs.85).aspx
    3. On the target system, use the USBView tool to locate the specific xHCI controller that supports USB debugging. This tool is part of the Windows debugging tools (see http://msdn.microsoft.com/en-in/library/windows/hardware/ff560019(v=vs.85).aspx ). To run it, you must also install .NET 3.5, which is not installed by default on either Windows 8/2012 or Windows 8.1/2012 R2. On Dell PowerEdge 13G servers there are several USB controllers; those designated "EHCI" are USB 2.0 controllers, and the USB 3.0 controller is designated "xHCI". Note the bus-device-function number of the specific xHCI controller that will be used for debugging; this is important for proper setup of USB 3.0 debugging.
    4. Find the specific physical USB port you are going to use for debugging. On Dell servers, the "SuperSpeed" (SS) logo is printed beside USB 3.0 ports. Connect any device to the port you wish to use, observe the change in USBView (you may have to refresh the view to see the new device inserted in the port), and verify the port does indeed show it is "Debug Capable".
    5. Set up the operating system on the target. Open an elevated command prompt on the target system and run the following commands:
       • bcdedit /debug on
       • bcdedit /dbgsettings usb targetname:Dell_Debug (any valid name can be given here)
       • bcdedit /set "{dbgsettings}" busparams b.d.f, providing the bus, device, and function number of the required xHCI controller (for example: bcdedit /set "{dbgsettings}" busparams 0.20.0). The busparams setting is important because Dell PowerEdge 13G systems have multiple USB controllers; it ensures debugging is enabled only on the USB 3.0 (xHCI) controller.
       Reboot the server after making the changes above!
    6. Connect the host and target systems using the USB A-A crossover cable, using the port identified above.

    Steps to Start the Debugging Session -

    1. Open a compatible version of WinDbg as administrator (very important!). Starting the debug session as administrator ensures the USB debug driver loads.
    2. For USB 3.0 debugging, the OS on the host must be Windows 8/Server 2012 or later and must match the "bitness" of the target OS, either 32-bit (x86) or 64-bit (x64). If you are debugging a 64-bit OS (all 2012+ Windows Server versions are 64-bit), then the host OS should be 64-bit as well.
    3. Open File -> Kernel Debug -> USB and provide the target name you set on the target (in our example, targetname: Dell_Debug). Click OK.
    4. The USB debug driver will be installed on the host system. This can be checked in Device Manager, and for successful debugging there should not be a yellow bang on this driver. It is OK that it says "USB 2.0 Debug Connection Device"; this is also the USB 3.0 driver (it works for both transports). The driver is installed the first time USB debugging is invoked.

    Notes:

    If you are debugging for an extended time, disable the "selective suspend" capability for the USB hub under the xHCI controller where the USB debug cable is connected. In Device Manager:

    1. Choose the "View" menu.
    2. Choose "View devices by connection". There are multiple USB hubs, so to get the correct hub, find the one that is under the xHCI controller.
    3. Navigate to the specific xHCI controller.
    4. Under the xHCI controller, you will see a USB hub.
    5. Choose Properties for the USB hub, then Power Management, and uncheck "Allow the computer to turn off this device to save power".

     

    Summary: Dell PowerEdge 13G servers with USB 3.0 provide a significant new method for debugging modern Windows operating systems.

    • USB 3.0 is fast.
    • USB 3.0 debugging is simple to set up.
    • USB 3.0 debugging requires a minimal hardware investment (special cabling).
    • For Dell PowerEdge 13G blades (M630), this provides a new way to debug an individual blade. Prior methods to debug Dell blades used the CMC (Chassis Management Controller) and routed serial output from the blade to the CMC, which was harder to configure and limited to serial port speeds.
    • A comparison of debug transport speeds follows; it is somewhat dated, but gives a good general idea of relative speeds.

    Transport                 Throughput (KB/s)   Faster than Serial
    Serial Port               10                  0%
    Serial Over Named Pipe    50                  500%
    USB2 EHCI                 150                 1500%
    USB2 on the Go            2000                20000%
    KDNET                     2400                24000%
    USB3                      5000                50000%
    1394                      1700                17000%

    Another Avenue To Avoiding Widespread Vulnerabilities: Small-Footprint Wyse ThinOS-Based Thin Clients


    Aside from its dramatic name, two things stood out about the most recent widespread computing vulnerability, known as Shellshock. One was the possibility that the ability to use Bash commands in UNIX or Linux to take control of an endpoint may have gone undiscovered for as long as two decades. The other was simply the scope of the vulnerability, which potentially impacted hundreds of millions of users globally, given Bash's inclusion at the core of two of the most widely used operating systems.

     

    The open source community and several major software companies issued patches this week, but Shellshock-related vulnerabilities may persist, since some IT departments may not patch their endpoints while others may not address the issue with the urgency it requires. Vulnerabilities may also persist because hastily distributed updates for urgent vulnerabilities are not always entirely comprehensive.

     

    As a result, in addition to exploring security options from Dell Data Protection Solutions, one option your IT team may not have considered is investing in a well-provisioned cloud client-computing solution that leverages Wyse ThinOS, the virus-immune firmware base running on our Wyse thin and zero client devices.

     

    The Benefits of ThinOS

     

    Given its inherent architecture, ThinOS provides an important layer of virus and malware protection at the edge. While most traditional security remediations, such as corporate packet sniffers, firewalls, and anti-virus protection suites, can identify and eliminate malware on your datacenter servers and in your users’ virtual machines, ThinOS can keep your users’ physical endpoints virus free, given its zero attack surface.

     

    Bash, a common feature in UNIX and Linux environments, allows software developers and IT managers to run operating system commands within other commands or scripts. However, it was only recently discovered that specially crafted Bash environment variables can create vulnerabilities on endpoints or in web servers running UNIX or Linux-based code. Those vulnerabilities have been nicknamed Shellshock, most likely because they leverage the Bash shell. Although the risk arising from Shellshock to Dell cloud client-computing products is low, we are actively working to eliminate any vulnerability.
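    A widely circulated one-line test illustrates the flaw: a command smuggled in after a function definition in an environment variable is executed as soon as a vulnerable Bash starts:

    $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    vulnerable          (printed only by an unpatched Bash)
    this is a test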

     

    Cloud Client Manager servers have already been updated with available security patches to eliminate this threat, and we will continue to monitor new patches and apply them proactively on our Cloud Client Manager servers as they become available. We are working with our partners SUSE and Canonical, who have issued patches to address Shellshock vulnerability in SUSE Linux 11 SP3 and for Ubuntu respectively. We are now validating these patches on our Linux operating system firmware images and Linux-based thin clients.

    Fewer Malware Vulnerabilities

     

    Enterprise IT departments that have migrated users to Wyse thin client and zero client endpoints running our proprietary ThinOS can expect far fewer malware remediation issues. Command-line vulnerabilities are generally mitigated by the closed nature of the operating system, and because ThinOS excludes Bash software by design. Perhaps most importantly, Wyse thin client and zero client devices running ThinOS maintain a small OS footprint of less than 15 MB that is not stored on a local hard drive.

     

    This absence of a local hard drive presents a smaller attack surface from which outside code might launch malware attacks. The Wyse architecture and our ThinOS design will help your IT team avoid future vulnerabilities that might take advantage of Shellshock or of SSL-based vulnerabilities such as "Heartbleed," which had the potential to impact as many as two-thirds of all web servers last April. Because Wyse ThinOS is not susceptible to the Heartbleed or Shellshock vulnerabilities, it is unlikely to be susceptible to similar exploits in the future, given the security advantages of the ThinOS architecture. This means your IT managers and users will have fewer worries running ThinOS. Contact your Dell representative for more information.

    Enabling USB 3.0 on Dell PowerEdge 13G Servers with Microsoft Windows Server Operating Systems


    This blog was originally written by Gobind Vijayakumar and Perumal Raja from DELL OS Engineering Team. 

    With Dell PowerEdge 13G servers, we have provided the option of enabling USB 3.0 support for your operating system. This blog focuses on enabling this feature with Microsoft Windows Server operating systems.

    First, USB 3.0 is disabled by default in the system BIOS; follow the steps below to enable it on your server.

    1. During POST, press “F2” to enter System Setup. Then Select “System BIOS”
    2. From the list of option - Select “Integrated Devices” option.
    3. Then you can enable/disable USB 3.0 using the USB 3.0 setting.

    IMPORTANT! ONLY enable USB 3.0 if the operating system supports it, or you may lose basic functionality like keyboard/mouse.

    USB 3.0 driver support in Microsoft Windows Server OS

    USB 3.0 drivers are native (in-box) in Windows Server 2012 and Windows Server 2012 R2 and don't require any additional configuration: simply enable the USB 3.0 feature in the BIOS, and these operating systems will work without issues.

    Windows Server 2008 R2 SP1 doesn’t have native (in-box) driver support for USB 3.0 which may create issues during Windows Server 2008 R2 SP1 installation with USB 3.0 enabled. You can follow one of the below methods to install Windows Server 2008 R2 SP1 along with USB 3.0 driver to make use of this feature.

    For Windows Server 2008 R2 SP1, here is the link for the USB 3.0 drivers (alternatively, they are listed on support.dell.com on the driver download page for the particular server).

    Method #1 (using Dell Lifecycle Controller)

    1. Make sure USB 3.0 is disabled before this installation
    2. Boot the system and press F10 at POST to enter LifeCycle Controller
    3. Connect your DVD ROM with Windows Server 2008 R2 SP1 media
    4. Select “Deploy OS” option under OS Deployment.

     

    5. On the next screen, create/initialize the RAID if it is not already configured; otherwise, select the "Go Directly to OS Deployment" option and click Next.

    6. On the next screen, select the boot mode and the operating system "Windows Server 2008 R2 SP1". Then click Next.

    Note: Secure Boot is only supported with Windows Server 2012 or later.

    7. Once selected, Dell Lifecycle Controller pulls in the required drivers, which include the USB 3.0 driver, for OS deployment.

    8. After this stage, please select the mode of installation required and complete the OS installation from the media.

    9. Enable USB 3.0 in the BIOS; the OS boots up with the required drivers installed.

    Method #2

    1. Disable USB 3.0 from the server BIOS settings using the steps given above.
    2. Create a folder named "$WinPEDriver$" on a USB drive and copy the USB 3.0 drivers from the link above into that folder.
    3. Plug the USB drive into one of the ports and start the Windows Server 2008 R2 SP1 installation.
    4. The USB 3.0 drivers will be loaded automatically from the USB drive during the OS installation.
    5. After the OS installation completes, reboot the server and enable USB 3.0 from the server BIOS settings.
    6. The server boots into the OS with the required USB 3.0 driver installed.

    Method #3 (post-install)

    1. Download the USB 3.0 driver from the link provided above and extract the files.
    2. Disable USB 3.0 from the server BIOS settings and install the Windows Server 2008 R2 SP1 OS.
    3. After completing the OS installation, boot into the OS.
    4. Open a command console and run the command below (note: don't use the "-I" switch):
        1. Pnputil.exe -a <path to the .inf file of the USB 3.0 driver>
    5. Reboot the server and enable USB 3.0 from the server BIOS settings.
    6. The server boots into the OS with the required USB 3.0 driver installed.

    References/Additional information

    1) Microsoft support for USB 3.0 - http://msdn.microsoft.com/en-us/library/windows/desktop/hh848067(v=vs.85).aspx

    2) Dell Lifecycle Controller - http://en.community.dell.com/techcenter/systems-management/w/wiki/lifecycle-controller/

    3) USB 3.0 kernel debugging - http://en.community.dell.com/techcenter/b/techcenter/archive/2014/09/30/usb3-kernel-debugging-with-dell-poweredge-13g-servers

    Keynotes, Sessions, and BOGO – OH MY! Dell World & Dell World User Forum Updates Inside!


    BOGO

    The Dell World User Forum (#DWUF) "Buy 1 Get 1" (BOGO) offer has been extended! Take advantage of this amazing offer to bring a colleague for free. Plus, don't forget that your Dell World User Forum pass will also gain you access to the Dell World (#DellWorld) main event, keynotes, and sessions.

    Dell World keynotes

    Dell World opening keynote: The opening session of Dell World 2014 will set the stage for Dell’s role in the exciting new reality of how the next generation of technology solutions will transform lives, businesses and economies. Read more >

    • Michael Dell, Chairman and Chief Executive Officer, Dell
    • Tom Reilly, Chief Executive Officer, Cloudera
    • Shyam Sankar, President, Palantir
    • Michael Chui, Partner, McKinsey Global Institute

    Afternoon keynote: This lively session, featuring Erik Brynjolfsson and Andrew McAfee, will be moderated by reddit co-founder, investor and entrepreneur Alexis Ohanian. Topics will include the impact of personal technology, the near-boundless access to information and how they enrich our lives. Read more >

    • Alexis Ohanian, Entrepreneur and investor, Co-founder of the social news site reddit
    • Andrew McAfee, Principal Research Scientist at MIT
    • Erik Brynjolfsson, Director, MIT Initiative on the Digital Economy; Professor, MIT Sloan School, Chairman, Sloan Management Review; Research Associate, National Bureau of Economic Research

    Dell World closing keynote: Michael Dell will be joined by disruption leaders — people inspiring a new way of thinking and leading cultural change. Read more >

    • Michael Dell, Chairman and Chief Executive Officer, Dell
    • Dr. Peter H. Diamandis, Chairman and CEO, X PRIZE Foundation

    Dell World User Forum Sessions

    Over 160 sessions spanning from technical in-depth sessions & hands-on-labs, to solution overview content that will fill your schedule and your mind with knowledge to take back to the office and positively impact your business!

    View all of the User Forum sessions and build your agenda today!

    Dell World Sessions

    More than 50 amazing sessions on Cloud, Security, Mobility and Big Data

    Check out our full lineup of more than 50 remarkable sessions that will allow you to experience our innovative hardware, software and services firsthand, through our customers' stories and their successful use cases. Discover strategies to simplify complex IT challenges, drive out inefficiency and — most important — ignite your entrepreneurial spirit so you can help your organization seize the opportunities created by today’s changing world. Complete session list >

    Need more? Here is the whipped cream and cherry to add to the banana split of unlimited flavors happening at Dell World!

    Don’t forget, we are extending the chance to bring a colleague for free. Take advantage of the Buy One Get One (BOGO) registration offer. Register Today!

    Dell World User Forum – Three Seasoned Veterans Tell You Why It’s Worth It


    Still deciding whether to spring for Dell World User Forum 2014? It's coming up November 4-7 in Austin, and we're doing everything we can to make it easy for you to attend. I want to point out a few highlights of what you can expect, especially as a KACE customer, and then I'll let a few DWUF veterans tell you about their ROI.

    BOGO and a free pass to Dell World

    User Forum is a lot more than panels and exhibits. User Forum brings you together with the Dell experts – engineers, architects, product managers – who build and support the products you work with day in and day out. It’s your chance to come in with a five-pound bag of questions and get them answered face to face, whether in a lab, at the Geek Bar or in a hallway.

    User Forum features hands-on labs in which you can finally sit down for that hour you’ve been promising yourself and dive deep into Dell products like KACE appliances. As soon as you get back to the office – and sometimes even before then – you’ll start to see a return on your investment in productivity.

    And speaking of investment, we have two financial incentives for you:

    • Your User Forum pass includes a pass to the Dell World main track event as well. Catch keynotes and presentations by Peter Diamandis of the X PRIZE Foundation, Erik Brynjolfsson of MIT and Michael Dell of – well, you know which company he works for.
    • We’re running a BOGO – a Buy-One-Get-One offer so you can share a pass with a colleague at no additional charge.

    If you or your organization is a current Dell customer, User Forum is the place for you.

    Sessions

    Here are some of the most popular KACE sessions and labs to look for:

    • K1000 Advanced Topics. Our engineers will help you understand what's under the covers of your K1000. You’ll take away a deeper understanding of how best to use this systems management platform in your environment.
    • Software Packaging/Scripting. We’ll talk about packaging, with real-world examples of tough deployments.
    • Software Distribution. We’ll go beyond the basics to some unconventional wisdom around deploying software, including large installers, complex installers and repackaging.
    • Patching: Getting Started, and Going Beyond Basics. Learn how to patch your environment with the K1000, then design a sustainable patching system with integrated automation and reporting.
    • Troubleshooting the K1000. Understanding how to debug is a skill all admins should hone regularly.

    We’ll cover these topics and more in breakout sessions, self-paced labs and hands-on labs led by instructors.

    3 veterans weigh in on Dell World User Forum

    But you don’t have to take my word for it. I asked a few real-world system administrators why they think User Forum is worth it. Here are some of their answers:

    • Ron Falkoff, System Analyst, Mary Institute and Saint Louis Country Day School (Missouri)

    “User Forum was productive for us because it accelerated our use of the KACE appliances. This is when we can raise specific issues we are having with others during birds-of-a-feather, or just with our peers in other industries having the same experiences.

    “On the way home one year, I implemented Smart Labels from San Francisco Airport, they populated by the time I got to O’Hare, and I began using them when I got to the St. Louis Airport. Another year, I fixed patching remotely on my appliance while talking to an expert at the Geek Bar.

    “I would tell people to not miss the product feedback, and to go to one session outside their comfort zone.”

    • Stacy Crotser, Computer Lab Administrator, Sam M. Walton College of Business, University of Arkansas

    “I got some great stuff out of the instructor-led labs. Getting to watch a presentation, then jumping right in to try it myself was fantastic! I learned lots of techniques and implemented them when I got home. I have referred time and again to the USB key with all the instructor-led presentations and training sessions. That USB key was the single best thing I have EVER gotten from a conference, and I got value out of it all year long.

    “If you are a KACE administrator, then the User Forum is a MUST! There was more information jam-packed into the conference than you could pick up otherwise in a whole year.”

    • Deedra Pearce, Director of Information Systems, Green Clinic Surgical Hospital (Louisiana)

    “Before I attended User Forum, I didn't interact much with our KBOXes; I did what I needed to do, then got right out, so I didn't realize all the capabilities they had. At User Forum it was so nice to see all the software integration, especially with all our other appliances. As soon as we returned, we got involved in the 6.0 update and couldn't wait to use the new user-friendly dashboard we’d learned about.

    “As IT director I learned so much at User Forum. I’ve been able to help make our CIO’s daily job so much easier in the past year as I've learned more about the system. User Forum has helped me develop a relationship with Dell, and especially with the KACE team.”

    Your turn

    So there you have them – three seasoned veterans telling you why Dell World User Forum 2014 is worth it. As KACE training lead, I keep in regular contact with peers at lots of companies using KACE. The networking and user base at User Forum grow steadily year upon year.

    • Have a look at the User Forum agenda and start picking out the labs and sessions you want to attend.
    • Double your internal expertise by grabbing your BOGO now. Register for User Forum and get a pass to all Dell World sessions.

    Overview of SELinux in RHEL7


    This blog post was originally written by Srinivas Gowda G from the Dell Linux Engineering group.

    Dell recently announced support for RHEL7 on Dell PowerEdge servers. As with any SELinux (Security-Enhanced Linux) enabled OS, your applications might hit SELinux surprises on RHEL7. Applications that ran perfectly well on RHEL6 might now complain about SELinux denials on RHEL7. When faced with SELinux issues, the first temptation is often to disable SELinux or switch to permissive mode. Not the best thing to do!

    In this article I intend to give an overview of the SELinux architecture, demonstrate the usage of some of the SELinux utilities available as part of RHEL7, and give a few pointers on how best to deal with SELinux denials. SELinux provides an excellent access control mechanism built into Linux. It was originally developed by the US National Security Agency and is now part of various Linux distributions, including RHEL7, providing enhanced security to the Linux operating system.

    Most Linux users are familiar with DAC (Discretionary Access Control), the permission levels that can be assigned to files in Linux, as shown in this quick example:

    $ ls -l foo.txt

      -r-xr-xr-x  1 test testgroup   112 Oct  3  2013 foo.txt

    SELinux implements Mandatory Access Control (MAC) on top of the existing DAC. In DAC, the owner of an object specifies which subjects can access it. Say for the file foo.txt above you want to give read and write permission to all users and groups, and in addition give the owner “test” execute permission. We can use the chmod command to grant these permissions:

    $ chmod 766 foo.txt

    $ ls -l foo.txt

       -rwxrw-rw-  1 test testgroup   112 Oct  3  2013 foo.txt

    As we can see, in DAC access to files is controlled at the discretion of the owner, based on Linux user and group IDs.

    Configuring SELinux

    One of the important files in the configuration space is /etc/selinux/config; here is the default content you might find in RHEL7:

    # This file controls the state of SELinux on the system.

    # SELINUX= can take one of these three values:

    #     enforcing - SELinux security policy is enforced.

    #     permissive - SELinux prints warnings instead of enforcing.

    #     disabled - No SELinux policy is loaded.

    SELINUX=enforcing

    # SELINUXTYPE= can take one of these three values:

    #     targeted - Targeted processes are protected,

    #     minimum - Modification of targeted policy. Only selected processes are protected.

    #     mls - Multi Level Security protection.

    SELINUXTYPE=targeted 

    To make a persistent change to the SELinux status or protection type (targeted/MLS), you must edit the /etc/selinux/config file and restart the operating system.
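    For example, a minimal sketch of making that change from the shell (this assumes the stock SELINUX=enforcing entry is present; back up the file first):

    # Switch the default mode to permissive, then reboot for it to take effect
    $ sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

    $ reboot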

    By default, RHEL7 enables SELinux in enforcing mode with SELINUXTYPE set to targeted. There are a good number of utilities for working with SELinux; I’ll try to cover as many as possible in this blog and restrict myself to the default “targeted” policy.

    The sestatus utility can be used to check the current status of SELinux on your RHEL7 system:

    $ sestatus

    SELinux status:                 enabled

    SELinuxfs mount:                /sys/fs/selinux

    SELinux root directory:         /etc/selinux

    Loaded policy name:             targeted

    Current mode:                   enforcing

    Mode from config file:          enforcing

    Policy MLS status:              enabled

    Policy deny_unknown status:     allowed

    Max kernel policy version:      28

    setenforce can be used to change the SELinux mode in a non-persistent way:

    $ setenforce

      usage:  setenforce [ Enforcing | Permissive | 1 | 0 ]

    Similarly, the getenforce utility can be used to get the current SELinux mode:

    $ getenforce

    Enforcing
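    For instance, to drop into permissive mode temporarily and verify (requires root; the change reverts at the next reboot):

    $ setenforce 0

    $ getenforce

    Permissive

    $ setenforce Enforcing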

    Security Context

    Each file, user, and process on an SELinux-enabled system has a security label, called a context, associated with it. Here are some examples of how you can check the context of a file, process, or user.

    To look at the SELinux context of /bin, you can use ls -dZ, or you may prefer secon, which is more descriptive:

    $ secon -f /bin/

         user: system_u

         role: object_r

         type: bin_t

         sensitivity: s0

         clearance: s0

         mls-range: s0
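    For comparison, ls -dZ prints the same context in compact colon-separated form (output abridged; note that on RHEL7 /bin is a symlink to /usr/bin, so the exact listing may vary):

    $ ls -dZ /bin

      lrwxrwxrwx. root root system_u:object_r:bin_t:s0 /bin -> usr/bin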

    In this example we have a file myFile for which SELinux records a user (unconfined_u), a role (object_r), a type (user_home_t), and a level (s0); it is this information that is used to make access control decisions in SELinux.

    $ ls -Z myFile

    -rw-rw-r--. user1 group1 unconfined_u:object_r:user_home_t:s0 myFile

    Similarly, for a process whose PID is 3161, you can check the SELinux context:

    $ secon -p 3161   (or use ps -eZ)

    user: unconfined_u

    role: unconfined_r

    type: unconfined_t

    sensitivity: s0

    clearance: s0:c0.c1023

    mls-range: s0-s0:c0.c1023

    sestatus and secon are part of the policycoreutils package. To list SELinux-confined users:

    $ semanage user -l

                    Labeling   MLS/       MLS/
    SELinux User    Prefix     MCS Level  MCS Range        SELinux Roles

    guest_u         user       s0         s0               guest_r
    root            user       s0         s0-s0:c0.c1023   staff_r sysadm_r system_r unconfined_r
    staff_u         user       s0         s0-s0:c0.c1023   staff_r sysadm_r system_r unconfined_r
    sysadm_u        user       s0         s0-s0:c0.c1023   sysadm_r
    system_u        user       s0         s0-s0:c0.c1023   system_r unconfined_r
    unconfined_u    user       s0         s0-s0:c0.c1023   system_r unconfined_r
    user_u          user       s0         s0               user_r
    xguest_u        user       s0         s0               xguest_r

    semanage login -l shows the mapping between SELinux users and Linux login names:

    $ semanage login -l

    Login Name           SELinux User         MLS/MCS Range        Service

    __default__          unconfined_u         s0-s0:c0.c1023       *
    root                 unconfined_u         s0-s0:c0.c1023       *
    system_u             system_u             s0-s0:c0.c1023       *

    The default SELinux policy used in RHEL7 is the "targeted" policy. In targeted policy, type translates to a domain for a process and a type for a file: an executable of type type_t transitions to a domain domain_t as defined by the entry point permissions in the policy tables. If you wish to change security labels, you can do so using the chcon or semanage utility. Changes made with chcon are not persistent and are lost when file systems are relabeled. If you are dealing with SELinux issues, it is very important to know how to set the correct or appropriate context on files and directories; this will help you solve the majority of SELinux denials.

    The following example demonstrates these steps. I have a user “linux” whose home directory already contains a folder called Music.

     # Directory Music has type audio_home_t

    $ ls -dZ Music

      drwxr-xr-x. linux linux unconfined_u:object_r:audio_home_t:s0 Music

     # Now I will create a dummy file, file1, inside the "Music" folder and check its context.

    $ touch Music/file1

    $ ls -Z Music/

    -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

     # Let’s use chcon to relabel the type of directory "Music" from audio_home_t to admin_home_t

    $ chcon -t admin_home_t Music

    $ ls -dZ Music/

    drwxr-xr-x. linux linux unconfined_u:object_r:admin_home_t:s0 Music

     # Now that we have relabeled, let’s create file2 inside "Music"

    $ touch Music/file2

     # Unlike file1, which had "audio_home_t", we now see that file2 is labeled "admin_home_t", the same as its parent directory

    $ ls -Z Music/

    -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

    -rw-rw-r--. linux linux unconfined_u:object_r:admin_home_t:s0 file2

     restorecon is an SELinux utility mainly used to reset the security context of files or directories. It modifies only the type portion of the security context of objects with preexisting labels.

    # Restore files to default SELinux security contexts. All the previous changes will be reverted.

    $ restorecon -R .   

    # We can now see that both Music and the files inside it are relabeled to the default values.

    $ ls -dZ Music/

      drwxr-xr-x. linux linux unconfined_u:object_r:audio_home_t:s0 Music

    $ ls -Z Music/

      -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

      -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file2

    You can’t relabel files with arbitrary types.

    $ chcon -t dummy_type_t Music/file1

      chcon: failed to change context of Music/file1 to unconfined_u:object_r:dummy_type_t:s0 : Invalid argument

    However, new types can be created by writing new policy rules.

    If you are wondering how to validate the correctness of an object's security context, refer to the matchpathcon utility: it queries the system policy for the default context of a file and reports it. For example, run before the restorecon above, while file2 still carried admin_home_t:

    $ matchpathcon -V /home/linux/Music/file2

      /home/linux/Music/file2 has context unconfined_u:object_r:admin_home_t:s0, should be unconfined_u:object_r:audio_home_t:s0

    By now you must be wondering where these context rules are defined. In RHEL7, /etc/selinux/targeted/contexts/files is where the default file contexts are defined.

    As we have seen, changes made via chcon are not persistent, and non-default changes made via chcon can be reverted with the restorecon utility. The semanage fcontext command is used to set file contexts persistently. Similar to the example above, let’s change the context of a file, but this time make the change persistent.

    # Let’s create a folder called config under /root

    $ mkdir config

    # By default policy labels the directory as admin_home_t

    $ ls -Zd config/

      drwxr-xr-x. root root unconfined_u:object_r:admin_home_t:s0 config/

     

    # Let’s relabel it to config_home_t

    $ chcon -t config_home_t config/

    # As expected, matchpathcon complains about the wrong context, because the default policy says the config directory under /root should be admin_home_t

    $ matchpathcon -V config/

      config has context unconfined_u:object_r:config_home_t:s0, should be system_u:object_r:admin_home_t:s0

     

    # Using restorecon, the context reverts to the default label as defined by the policy

    $ restorecon -v config/

    restorecon reset /root/config context unconfined_u:object_r:config_home_t:s0->unconfined_u:object_r:admin_home_t:s0

    # Now let’s write a new security context rule that relabels the "config" folder under /root to config_home_t

    $ ls -Zd config/

      drwxr-xr-x. root root unconfined_u:object_r:admin_home_t:s0 config/

    $ semanage fcontext -a -t config_home_t  "/root/config"

    $ restorecon -R -v .

      restorecon reset /root/config context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:config_home_t:s0

    $ ls -Zd config/

      drwxr-xr-x. root root unconfined_u:object_r:config_home_t:s0 config/

    # Let’s create files under the /root/config/ folder and check their contexts

    $ touch config/foo1

    $ touch config/foo2.config

    # foo1 and foo2.config inherit the context of their parent directory

    $ ls -Z config/*

      -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 config/foo1

      -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 config/foo2.config

    # Let’s say I want only files inside /root/config with a .config extension to have the security context config_home_t

    $ semanage fcontext -a -t config_home_t  "/root/config(/.*\.config)"

    $ restorecon -R -v .

      restorecon reset /root/config/foo1 context unconfined_u:object_r:config_home_t:s0->unconfined_u:object_r:admin_home_t:s0

     

    $ ls -Z config/

      -rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 foo1

      -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 foo2.config

    You should be able to find all these new rules in "file_contexts.local":

    $ cat /etc/selinux/targeted/contexts/files/file_contexts.local

     # This file is auto-generated by libsemanage

     # Do not edit directly.

     /root/config    system_u:object_r:config_home_t:s0

     /root/config(/.*\.config)    system_u:object_r:config_home_t:s0
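    If you later want to undo such a local rule, a minimal sketch: semanage fcontext -d deletes the rule, and restorecon then reapplies the defaults.

    # Delete the local rule, then relabel to restore the policy defaults
    $ semanage fcontext -d "/root/config(/.*\.config)"

    $ restorecon -R -v /root/config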

    Now that we have managed to modify the security context of files, let’s move on to another very useful technique for resolving SELinux denials.

    Booleans

    SELinux provides a set of Booleans that let you change parts of the SELinux policy at run time. semanage boolean -l lists the policies that can be changed this way. Booleans are quite useful since changes can be made easily without writing policy.
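    For example, to find the Booleans related to a particular service (output abridged; the exact Booleans and their descriptions vary by policy version):

    $ semanage boolean -l | grep httpd_enable

    httpd_enable_homedirs          (off  ,  off)  Allow httpd to read home directories

    httpd_enable_cgi               (on   ,   on)  Allow httpd cgi support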

    getsebool -a lists the current status of these Booleans; setsebool can be used to change the status of any available Boolean:

     

    $ getsebool puppetmaster_use_db

      puppetmaster_use_db --> off

    $ setsebool -P puppetmaster_use_db on

    $ getsebool puppetmaster_use_db

      puppetmaster_use_db --> on

    The -P flag makes the setting persistent across reboots.

    If you want to know all the SELinux rules that exist on your system, there is a utility called sesearch that helps you find them (use --all to list all rules). sesearch is part of the setools-console package, which contains other useful utilities such as findcon and seinfo.
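    For instance, a hedged sketch of querying the allow rules that let the Apache domain use HTTP ports (output abridged; rule details depend on the installed policy):

    $ sesearch --allow -s httpd_t -t http_port_t -c tcp_socket

    Found 1 semantic av rules:

       allow httpd_t http_port_t : tcp_socket { recv_msg send_msg name_bind } ;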

    Access Vector Cache (AVC)

    SELinux policies are always evolving: new rules may be added while old ones are refined or removed. This invariably means SELinux may deny access to some operations of your application. These denials are usually referred to as AVC denials.

    SELinux uses a cache called the Access Vector Cache (AVC) that records both successful and failed access decisions. AVC denials are usually logged in /var/log/audit/audit.log; they can also appear in the system logs, depending on which daemons are running (auditd, rsyslogd, setroubleshootd). On an X Window System, if you have setroubleshootd and auditd running, a warning message is displayed on the console. If you do not have a GUI console, you can either use ausearch, which is part of the audit package, or inspect audit.log or the system log with good old grep.
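    For example, two common ways to pull AVC denials out of the audit log (paths assume the default auditd configuration):

    $ ausearch -m avc -ts recent

    $ grep denied /var/log/audit/audit.log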

    SELinux provides an easy way to fix denials using audit2allow, a helpful utility that generates SELinux policy from the audit logs to get rid of the denials. But this is something you want to avoid if you are not sure what policies are being added. Most denials can be dealt with either by changing the context of the files in conflict or by enabling an available Boolean. If those two changes don’t take care of your denials, then take great care before you start adding new policies.
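    For completeness, a typical audit2allow workflow looks like the following sketch (the module name myhttpd is just an example); always review the generated .te file before loading the module:

    $ grep httpd /var/log/audit/audit.log | audit2allow -M myhttpd

    $ cat myhttpd.te        # review the proposed rules first

    $ semodule -i myhttpd.pp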

    RHEL7 Update

    Some RHEL7 features, such as systemd, will trigger changes in the security context of your applications, so care must be taken by application owners when developing or porting applications to RHEL7. It’s important to understand the context transitions of processes and files; the “sepolicy transition” command can be used to generate a process transition report. RHEL7 has also addressed some of SELinux’s file name transition issues. If you are looking for more detailed information on SELinux in RHEL7, refer to the SELinux_Users_and_Administrators_Guide, which also provides an overview of new SELinux features in RHEL7.
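    For instance, a quick sketch of a transition report for the Apache domain (output abridged; the exact transitions depend on the installed policy):

    $ sepolicy transition -s httpd_t

    httpd_t @ httpd_exec_t --> httpd_t

    httpd_t @ httpd_sys_script_exec_t --> httpd_sys_script_t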
