Monday, 10 November 2014

Thinking of Using Microsoft Windows NT or XP Embedded?

A report for OEMs and manufacturers who are considering the use of Microsoft Windows NT or XP Embedded in their manufacturing equipment and processes.

The use of Microsoft Windows NT and XP Embedded operating systems offers many attractive benefits and rich functionality for an embedded application. To utilize these operating systems effectively, however, some very important issues must be considered that will affect the way your application runs today and into the future.

Understanding the Use of an Application is Essential to Successful Implementation 

How is the system going to be used today and in the future?


All intelligent electronic devices use an operating system of some kind. For this reason there are literally thousands of operating systems, many of which are considered embedded because they are highly specific and dedicated to one platform and/or application. Microsoft Windows NT and XP are commonly used in commercial PCs, medical applications, factory automation, and many other applications around the world, and these feature-rich, broadly supported operating systems have become a universal favorite. However, with their comparatively large size (footprint) and overhead, some applications cannot use these operating systems effectively.

Microsoft offers Embedded versions that can overcome the size and overhead issues. Chart A (below) describes some of the advantages of using these embedded operating systems in an application. With these versions, a solid understanding of the application and its future uses is essential to the successful implementation of the system.

The benefits and features associated with the Microsoft Embedded operating systems make investigating this option an easy choice. When considering one of these operating systems, however, there are some hidden issues and concerns that need to be understood. The remainder of this paper is dedicated to presenting these issues and concerns so that the decision and implementation plans will be successful. The intent is not to provide a step-by-step process for implementation, but rather a checklist of issues and concerns that can be used to make an educated and successful decision.


 "I need Embedded NT on a cost effective solid-state device so that the hard drive can be eliminated," is one of the most frequently requested options heard from Xycom Automation customers.  It becomes very obvious that many times the entire nature of this request is not completely understood.  Windows NT or XP Embedded in its full form can easily be loaded on a system.  To address the second part of this request (elimination of the hard drive), the customer needs to understand what reducing the size of the operating system means both to the application and to the supplier of the system.  This leads to an important question: "How is the system going to be used today and in the future?"  As much as many suppliers wish they could answer this for the customer, the use of the proposed system needs to be determined by the customer and/or the end user.

Understanding the Image Development Process

The first step in deciding to use these operating systems is understanding the image development process associated with building a reduced-size image for the system. To reduce the size of the image, features and/or services must be eliminated. Without a clear understanding of which devices and applications will be used with the system, a reliable and successful system cannot be developed. Microsoft supplies a toolkit for customizing the embedded operating system image called Windows Embedded Studio. The toolkit includes Target Designer, Component Designer, a component database and manager, and platform-specific tools. An understanding of the process and each of these tools is crucial to the success of the decision to use one of these embedded operating systems.

A development process overview, footprint configuration overview, and product datasheets are available at http://www.microsoft.com/windows/embedded/.  The steps in the development process as presented by Microsoft are as follows:
 
1. Identify the hardware on your target device.
2. Choose the features and functionality required in your run-time image.
3. Identify the embedded system-specific features that need to be included in your target device.
4. Include custom components.
5. Build your run-time image.
6. Deploy your run-time image.
 
Example issues and concerns associated with each of these steps are discussed individually in the following paragraphs and are not to be construed as instructions.

Hardware – The device supplier, unless supplying the entire system, cannot determine the additional devices that will be added to the system. Items such as additional I/O devices, network connections, programmable controllers, vision systems, motion control systems, scanners, data acquisition boards, and many others can only be decided by the OEM and/or end user of the device. When building a smaller-footprint operating system, support for these devices may not be available if it was not included in the original image. For example, Xycom can supply an industrial computer with a solid-state storage device and a reduced-size operating system, but if one of the hardware devices mentioned above is later added, the system may not function properly. For this reason it is important that the OEM and/or the end user be involved in the development of the original run-time image.
 
Features and Functionality – Embedded operating systems offer a vast array of selectable features, unlike full system installations, which offer far fewer choices. For example, while a normal NT or XP installation only lets the user decide whether or not to install Internet Explorer™, Microsoft Target Designer also allows the browser's home page and title bar to be preset. The OEM or end user is ultimately responsible for which applications will be added after receipt of the base system. If an application that requires browser support is added and the browser was not included in the run-time image, the application may not run properly. This is another example of the importance of the OEM's and/or end user's involvement in the development of the original run-time image.

Embedded System-specific Features – In some cases, the embedded operating system is intended to run on a standard personal computer. Many times, however, embedded devices have different requirements, such as no display capability or no writeable hard drive. The use of smaller solid-state media to store the image is another consideration often included in an embedded application. With reduced-size media, data may not be storable for extended periods, and swap files may not be usable. The size of the media and how the system is intended to be used are key to building the correct image. A failure due to insufficient storage space after implementation is not an acceptable scenario.
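A rough storage budget makes this concrete. The following minimal sketch checks whether a reduced image fits a candidate card; every figure in it is a hypothetical assumption for illustration, not a measured Windows NT or XP Embedded size:

```python
# Minimal storage-budget sketch. All figures are hypothetical assumptions
# for illustration, not measured Windows NT/XP Embedded sizes.

MB = 1024 * 1024

image_size   = 180 * MB   # assumed reduced OS run-time image
application  = 40 * MB    # assumed application and support files
swap_file    = 0          # swap disabled on solid-state media
log_headroom = 20 * MB    # space reserved for logs and temporary files

media_capacity = 256 * MB  # candidate CompactFlash card

required = image_size + application + swap_file + log_headroom
print(f"required: {required / MB:.0f} MB of {media_capacity / MB:.0f} MB")
assert required <= media_capacity, "image will not fit the target media"
```

Running this kind of check against realistic numbers for your own image, before committing to a media size, avoids the insufficient-storage failure described above.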
 
Custom Components – If the extensive list of components supplied with these embedded operating systems needs to be supplemented, it can be done during the development phase. These embedded operating systems, unlike a normal desktop system, are intended to be dedicated to specific tasks and not continually altered after implementation. Adding components after the image is made may cause the system to function improperly if a needed feature/service is unavailable. Third-party vendor components, INF files, and other utilities can be added directly to the run-time image before deployment by using Microsoft Target and Component Designers.
 
Build the Run-time Image – Building the image for these embedded operating systems differs from building an image from source code. Using Target Designer, the image is generated by reassembling individual components into the operating system. With Windows Embedded Studio, dependencies can be checked and resolved before building the run-time image. Files and resources are assembled, directory structures generated, files copied to the appropriate directories, and registry hives built before the image is finalized. This process may require multiple attempts if all devices, features, and services needed by the end user have not been completely addressed.
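The dependency-resolution idea at the heart of this step can be sketched in a few lines. This is a simplified model, not the toolkit's actual data structures; the component names and dependencies are hypothetical:

```python
# Simplified model of a component dependency check: expanding a feature
# selection to every transitive dependency before the image is built.
# Component names and dependencies are hypothetical.

components = {
    "ntoskrnl": set(),
    "shell":    {"ntoskrnl"},
    "tcpip":    {"ntoskrnl"},
    "browser":  {"tcpip", "shell"},
}

def resolve(selected):
    """Return the selection plus every transitive dependency."""
    closure, stack = set(), list(selected)
    while stack:
        name = stack.pop()
        if name in closure:
            continue
        if name not in components:
            raise KeyError(f"unknown component: {name}")
        closure.add(name)
        stack.extend(components[name])
    return closure

# Selecting only the browser silently pulls in tcpip, shell and the kernel.
print(sorted(resolve({"browser"})))
```

The point of the sketch is that a single feature choice can pull a surprising amount into the image, which is why the dependency check must run before, not after, the build.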

Deployment of the Run-time Image – The image created on the development system will then need to be transferred to the target platform or device. Once the target device has been deployed, it may be difficult to add software and/or additional hardware without using Windows Embedded Studio. Examples of problems that can be associated with adding devices, software, or services are:
 
· No CD-ROM support – How will software be installed?
· No network services included – How will the unit be connected to the enterprise network in the future?
· No user interface (display) capability – How will the end user work with the system?
· Internet Explorer not loaded – How will software that requires it be added?
 
Once the run-time image has been determined, many suppliers can provide a target device with the developed run-time image included.

Hardware Selection
Now that the development process is understood, the next step is selecting the hardware platform or target device. The requirements of the application will most likely dictate the type of hardware to be selected. Once the hardware platform is selected, all auxiliary devices and application software requirements need to be determined.

One consideration in selecting the hardware is the use of a hard drive. Solid-state media, such as CompactFlash™ memory devices, are in some cases significantly more reliable than traditional rotating hard drives. Applications that must withstand significant shock and vibration may require the hard drive to be eliminated and replaced with solid-state media. If a hard drive can be eliminated, the Microsoft Embedded operating systems may prove very beneficial. The single most important item affecting the system is the use of swap files. The standard NT and XP operating systems use swap files that can significantly increase the amount of storage required. With the standard operating systems, the size of the swap files can be reduced but they cannot effectively be disabled; with Microsoft Windows NT or XP Embedded, this function can be disabled if desired. Unlike hard drives, most solid-state media support a limited number of "writes" and "re-writes". The number is usually very large and applies to each segment of the media; CompactFlash, for example, has built-in wear-leveling software that distributes writes so that no one area is overburdened. A good understanding of the application and how it will be used is essential to successful implementation.
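Write endurance is easy to estimate roughly. The sketch below assumes ideal wear leveling and uses hypothetical endurance and write-rate figures; real cards and workloads vary, but the arithmetic shows why endurance is rarely the limiting factor at modest write rates:

```python
# Back-of-the-envelope flash endurance estimate. Every figure here is a
# hypothetical assumption for illustration, not a vendor specification.

capacity_mb       = 256       # CompactFlash card size
endurance_cycles  = 100_000   # assumed erase/write cycles per block
writes_mb_per_day = 50        # assumed application write volume

# Ideal wear leveling spreads writes over the whole card, so the total
# volume writable before wear-out is roughly capacity * endurance.
total_writable_mb = capacity_mb * endurance_cycles
lifetime_days = total_writable_mb / writes_mb_per_day

print(f"estimated wear-out horizon: {lifetime_days / 365:.0f} years")
```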

The decision to store data on solid-state media for future reference is a major area of concern. A 256 MB CompactFlash card will fill far more quickly than a 20 GB hard drive. The use of a network to store the data elsewhere, or the use of a hard drive only for storing data, are both options that may be available for the application. To emphasize the importance of the storage device choice: when the storage media fills up, whether solid-state or hard drive, the system and application may terminate unexpectedly. This is an unacceptable situation in today's manufacturing environment.
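The fill-time arithmetic is equally simple and worth doing before choosing media. The free-space and logging-rate figures here are hypothetical assumptions; the capacities match the comparison above:

```python
# Time-to-fill comparison. Free-space and logging-rate figures are
# hypothetical assumptions; capacities match the comparison in the text.

cf_free_mb     = 256 - 220        # assumed free space on a 256 MB card
hdd_free_mb    = 20 * 1024 - 220  # same image on a 20 GB hard drive
log_mb_per_day = 5                # assumed process-data logging rate

print(f"CompactFlash full in {cf_free_mb / log_mb_per_day:.0f} days")
print(f"hard drive full in {hdd_free_mb / log_mb_per_day / 365:.1f} years")
```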

Procurement of the hardware is another concern. Asking a hardware supplier for a system with Windows NT or XP Embedded on CompactFlash is not enough information to build the embedded system. For instance, Xycom could create images and ship product, but if additional hardware and/or software that is not fully supported by the image is added, the system may not function properly. On the other hand, Xycom could pre-install a customer-supplied image on the hardware platform of choice. Often there are several layers of suppliers, distributors, and/or manufacturers that may not entirely understand the application, decreasing the probability of successful implementation. When the end user plays an active role in determining the run-time image, the probability of successful implementation of hardware and software is significantly greater.

Application Software Selection
There is a multitude of applications available that are supported by Microsoft Windows NT and XP; this is one of the major advantages of considering Microsoft Windows NT or XP Embedded. Selecting the right one(s) for the application is important. Once all have been selected, it is mandatory to investigate and analyze the image to ensure that all functions of the selected package(s) are supported. Microsoft supplies tools in the Windows Embedded Studio toolkit to aid in this process.

It is recommended to include your application in the image so that every unit is set up exactly the same.  This is another example of how important the OEM and/or end-user is in the image generation process.  Hardware suppliers can pre-install the image as supplied by the customer, but asking the hardware supplier to load an Embedded operating system without pre-determining this information may result in a less satisfactory solution for both parties.

Component Selection
With the hardware platform and the application software selected, any additional items need to be determined. These include serial devices, expansion cards, separate controllers, external devices, and any additional software that may be required, all of which need to be supported by the generated image. The Windows Embedded Studio toolkit should be used to check for dependencies and support.

Any software, drivers, or files that are needed for support should be included in the image. These support files can be pre-loaded, but without knowledge of any additional items, the image may prove unsuccessful. A good example: an original run-time image is created without parallel port services because a printer was not required. Later, when a software application that requires a parallel-port hardware key is added, the new software will not work.
 
 
The Future
Now that the current application has been determined, the question "How will this device be used in the future?" needs to be answered. Although the system may run the current application successfully, it may not run correctly when different components or software are added in the future. Just as understanding and including all devices and software for the current application is important, so too is planning for future additions. Many Windows NT or XP Embedded devices will be deployed as dedicated computers, but in the future these devices may be perceived as typical computers that allow software and other devices to be added relatively easily. Any new devices or software will need to be re-evaluated using the Windows Embedded Studio toolkit and a new run-time image downloaded to the platform.

Summary
Microsoft Windows NT or XP Embedded operating systems are very appealing choices for many applications.  With their many features and benefits, time to market can be significantly reduced over the development and introduction of a proprietary operating system.  The use of these Embedded operating systems requires more understanding of the application than the implementation of a more traditional commercial operating system. 

Many concerns and issues have been discussed throughout this paper and are summarized in Chart B (below). It is hoped that this chart can serve as an aid in successfully implementing an embedded operating system in an application. If all issues and concerns are thoroughly considered and the Microsoft Windows Embedded Studio toolkit is utilized, the probability of successful implementation will be greatly increased.


Before deciding to use these Embedded operating systems, the issues and concerns presented here, as well as a complete understanding of the Microsoft Windows Embedded Studio toolkit and development process, should be addressed. Additional information about the embedded operating systems is available on the Microsoft web site at http://www.microsoft.com/windows/embedded/. A guide to developing with embedded Windows and several other resources are available at http://www.bsquare.com.

Xycom Automation welcomes customers to work closely with us for integrated hardware with preloaded Windows NT or XP Embedded run-time images. Our combined efforts will pay off with successful implementation for today and tomorrow. 

Chart A: Features & Benefits of NT and XP Embedded
· Scalability
  - The overall size of the operating system can be condensed by removing unneeded features and services
  - Allows smaller solid-state storage media to be used (eliminating the hard drive and rotating media)
· Built-in networking and communication services
  - Allows use of TCP/IP, DHCP, WinSock, RPC, RRAS, FTP, etc.
· Interoperability with existing PC and server hardware
  - Provides greater choice and flexibility when choosing your platform
· Win32 API support
  - Provides a consistent development environment
· Windows services support
  - Allows greater manageability
· C2-level security
  - Supports applications that demand secure environments
· Symmetric multiprocessing support
  - One solution provides a system that can be used for simple and very demanding applications
· Reduces time to market
  - Powerful authoring tools allow easy integration into your platform
  - Less time developing and supporting proprietary OS code
  - Less time developing drivers, services, and applications, and getting them to work
  - Broad range of productive Windows-based development tools
  - Many trained and experienced developers
  - Multitude of off-the-shelf hardware and device drivers
  - Large number of existing Win32 applications
  - Microsoft BackOffice® family applications
· Easy enterprise connectivity
  - Allows easy integration of new opportunities with an existing IT infrastructure
  - Devices can be introduced and managed like other Windows-based systems
  - Next-generation devices can participate in enhanced management environments (examples: Microsoft Systems Management Server, HP OpenView, IBM Tivoli, CA Unicenter TNG, etc.)
Source: Microsoft Corporation
 
  
Chart B: Issues & Concerns for Successful Implementation
· Microsoft Windows NT or XP Embedded
  - Do you have a complete understanding of your application today and in the future?
  - Do you have access to the right software expertise?
  - Do you have the Windows Embedded Studio toolkit? If not, how can it be accessed?
· Implementing an application software package(s)
  - Is all required functionality in the run-time image?
  - Will any functionality be added in the future? If so, is it supported in the image?
  - Have you utilized the tools included in the Windows Embedded Studio toolkit?
· Eliminating rotating media (hard drive)
  - Can the operating system image be reduced?
  - Is all functionality included in the run-time image?
  - What size solid-state device will be required?
  - Are swap files being used? If so, is the solid-state media correctly sized?
  - Are you going to store data? If so, where?
  - How many "writes" will be required of the solid-state device in normal use of the application?
· Additional hardware added to the platform
  - Is the hardware supported by your image?
  - Are all required drivers in the run-time image?
  - Will anything be added in the future? If so, is it supported in the image?
  - Have you utilized the tools included in the Windows Embedded Studio toolkit?
· Additional software added to the platform
  - Is the software supported by your image?
  - Will anything be added in the future? If so, is it supported in the run-time image?
  - Have you utilized the tools included in the Windows Embedded Studio toolkit?
· Procurement of a hardware platform with Windows NT or XP Embedded pre-installed
  - Has everything been considered and included in the creation of the image?
  - Who will be responsible for the image?
  - Have you utilized the tools included in the Windows Embedded Studio toolkit?
· Additional software or hardware in the future
  - Does the original image need to support these?
  - Who will create and verify a new image when items are added?
  - Are you expecting your system to be a typical computer? If so, an embedded operating system may not be the right choice.

Source: http://www.automation.com/library/articles-white-papers/hmi-and-scada-software-technologies/thinking-of-using-microsoft-windows-nt-or-xp-embedded

The Future of Industrial Automation

Since the turn of the century, the global recession has affected most businesses, including industrial automation. After four years of the new millennium, here are my views on the directions in which the automation industry is moving.

The rear-view mirror:

Because of the relatively small production volumes and huge variety of applications, industrial automation typically utilizes new technologies developed in other markets. Automation companies tend to customize products for specific applications and requirements, so the innovation comes from targeted applications rather than from any hot new technology.

Over the past few decades, some innovations have indeed given industrial automation new surges of growth. The programmable logic controller (PLC) – developed by Dick Morley and others – was designed to replace relay logic; it generated growth in applications where custom logic was difficult to implement and change. The PLC was far more reliable than relay contacts, and much easier to program and reprogram. Growth was rapid in automobile test installations, which had to be reprogrammed often for new car models. The PLC has had a long and productive life – some three decades – and (understandably) has now become a commodity.

At about the same time that the PLC was developed, another surge of innovation came through the use of computers for control systems. Mini-computers replaced large central mainframes in central control rooms, and gave rise to "distributed" control systems (DCS), pioneered by Honeywell with its TDC 2000. But, these were not really "distributed" because they were still relatively large clumps of computer hardware and cabinets filled with I/O connections.

The arrival of the PC brought low-cost PC-based hardware and software, which provided DCS functionality with significantly reduced cost and complexity. There was no fundamental technology innovation here—rather, these were innovative extensions of technology developed for other mass markets, modified and adapted for industrial automation requirements.

On the sensor side were indeed some significant innovations and developments which generated good growth for specific companies. With better specifications and good marketing, Rosemount's differential pressure flow-sensor quickly displaced lesser products. And there were a host of other smaller technology developments that caused pockets of growth for some companies. But few grew beyond a few hundred million dollars in annual revenue.

Automation software has had its day, and can't go much further. No "inflection point" here. In the future, software will embed within products and systems, with no major independent innovation on the horizon. The plethora of manufacturing software solutions and services will yield significant results, but all as part of other systems.

So, in general, innovation and technology can and will reestablish growth in industrial automation. But, there won't be any technology innovations that will generate the next Cisco or Apple or Microsoft.

We cannot figure out future trends merely by extending past trends; it’s like trying to drive by looking only at a rear-view mirror. The automation industry does NOT extrapolate to smaller and cheaper PLCs, DCSs, and supervisory control and data acquisition systems; those functions will simply be embedded in hardware and software. Instead, future growth will come from totally new directions.

New technology directions:

Industrial automation can and will generate explosive growth with technology related to new inflection points: nanotechnology and nanoscale assembly systems; MEMS and nanotech sensors (tiny, low-power, low-cost sensors) which can measure everything and anything; and the pervasive Internet, machine to machine (M2M) networking.

Real-time systems will give way to complex adaptive systems and multi-processing. The future belongs to nanotech, wireless everything, and complex adaptive systems.

Major new software applications will be in wireless sensors and distributed peer-to-peer networks – tiny operating systems in wireless sensor nodes, and the software that allows nodes to communicate with each other as a larger complex adaptive system. That is the wave of the future.

The fully-automated factory:

Automated factories and processes are too expensive to be rebuilt for every modification and design change – so they have to be highly configurable and flexible. To successfully reconfigure an entire production line or process requires direct access to most of its control elements – switches, valves, motors and drives – down to a fine level of detail.

The vision of fully automated factories has already existed for some time now: customers order online, with electronic transactions that negotiate batch size (in some cases as low as one), price, size and color; intelligent robots and sophisticated machines smoothly and rapidly fabricate a variety of customized products on demand.

The promise of remote-controlled automation is finally making headway in manufacturing settings and maintenance applications. The decades-old machine-based vision of automation – powerful super-robots without people to tend them – underestimated the importance of communications. But today, this is purely a matter of networked intelligence which is now well developed and widely available.

Communications support of a very high order is now available for automated processes: lots of sensors, very fast networks, quality diagnostic software and flexible interfaces – all with high levels of reliability and pervasive access to hierarchical diagnosis and error-correction advisories through centralized operations.

The large, centralized production plant is a thing of the past. The factory of the future will be small, movable (to where the resources are, and where the customers are). For example, there is really no need to transport raw materials long distances to a plant, for processing, and then transport the resulting product long distances to the consumer. In the old days, this was done because of the localized know-how and investments in equipment, technology and personnel. Today, those things are available globally.

Hard truths about globalization:

The assumption has always been that the US and other industrialized nations will keep leading in knowledge-intensive industries while developing nations focus on lower skills and lower labor costs. That's now changed. The impact of the wholesale entry of 2.5 billion people (China and India) into the global economy will bring big new challenges and amazing opportunities.

Beyond just labor, many businesses (including major automation companies) are also outsourcing knowledge work such as design and engineering services. This trend has already become significant, causing joblessness not only for manufacturing labor, but also for traditionally high-paying engineering positions.

Innovation is the true source of value, and that is in danger of being dissipated – sacrificed to a short-term search for profit, the capitalistic quarterly profits syndrome. Countries like Japan and Germany will tend to benefit from their longer-term business perspectives. But, significant competition is coming from many rapidly developing countries with expanding technology prowess. So, marketing speed and business agility will be offsetting advantages.

The winning differences:

In a global market, there are three keys that constitute the winning edge:
  • Proprietary products: developed quickly and inexpensively (and perhaps globally), with a continuous stream of upgrade and adaptation to maintain leadership.
  • High-value-added products: proprietary products and knowledge offered through effective global service providers, tailored to specific customer needs.
  • Global yet local services: the special needs and custom requirements of remote customers must be handled locally, giving them the feeling of partnership and proximity.
Implementing these directions demands management and leadership abilities that are different from the old, financially driven models. In the global economy, automation companies have little choice – they must find more ways and means to expand globally. To do this they need to minimize the domination of central corporate cultures and maximize responsiveness to local customer needs. Multi-cultural countries, like the U.S., will have significant advantages in these important business aspects.

In the new and different business environment of the 21st century, the companies that can adapt, innovate and utilize global resources will generate significant growth and success.


Source: http://www.automation.com/library/articles-white-papers/articles-by-jim-pinto/the-future-of-industrial-automation

Is it Time for New SCADA Software Technology?

About 13 years ago, a new software product was released for retail sale, and within its first 5 years of existence more than 400 million copies were sold. Today, over 1 billion copies have been sold. And what was this hugely successful software product? Microsoft's Windows XP operating system for personal computers. At the time of its release, it was a significant upgrade over its predecessors in terms of performance and usability, and it was the most widely used operating system in the world for a full decade.

With all of the opportunities presented by today's data-driven enterprise, the time has come to consider whether your SCADA system is limiting your potential.

Then, in April of this year, Microsoft ceased extended support for this enormously popular product. No more product support or security updates would be available. Did Microsoft do this because they hated their millions of customers? Did they discover some long-overlooked defect that would render the product dangerous or unstable? No. They simply knew that better operating systems were available, and even though Windows XP was a wonderful product that served many people very well, its time had come and gone.

During XP's wonderful run, computer technology continued to evolve. Much more powerful processors were created. Faster communication interfaces were developed. Computers began to operate in ways that could not possibly have been considered when XP was developed all those years ago. And what is the point in buying a new computer with all of these fancy new capabilities if you are running an operating system that will treat your computer as if it were built a decade earlier? The fact is that taking full advantage of your new computer's speed and power requires a new operating system - an operating system designed for today's technology.

What Does This Have to do With SCADA?
There is a lesson to be learned here about SCADA software in today's industrial environment. Most SCADA systems in place today were deployed 7, 10, even 20 years ago! If we think about the way technology has changed in the last 20, or even 10, years, it is preposterous to think that 10-year-old software is taking full advantage of the opportunities available. And not only has technology changed, but the very concepts that are fundamental to process automation have evolved beyond anything that would have been conceivable to a software developer 20 years ago. We are entering the era of big data and the industrial Internet of Things. There are more sensors and actuators on today's plant floor than SCADA developers would have thought possible 20 years ago.

A recent article by AutomationWorld's Jeanne Schweder investigates the changing industrial workplace and how existing SCADA systems are really holding companies back from taking full advantage of the opportunities available today. Per the article:

"Older SCADA systems were never designed to connect with the number of machines, sensors and other assets that manufacturers now want to monitor and control. Nor were they designed to handle the amount of data traffic and records these connections can generate. This lack of scalability, including the ability to access information through the Internet, can be a significant barrier to improving the quality and productivity of manufacturing processes."

The reality is that it doesn't matter what kind of fancy new equipment you install or data management strategies you implement if your SCADA software is operating with yesterday's technology as a limitation. Imagine buying a high-powered sports car with state-of-the-art technology and world class performance benchmarks. Then imagine taking the engine from a 20-year-old sedan with half of the horsepower and twice the emissions and using it to power your new sports car. Do you expect to get the maximum performance out of the car? The same top speed? The same acceleration? What about your gas mileage? Can we really expect any of the hardware to perform up to its potential?

Old SCADA technology can have the same sort of limiting effect on your automated processes, regardless of how smart your equipment or your management strategy is. The AutomationWorld article above provides some suggestions for comparing SCADA platforms. The suggestions include:

"...tools for HMI graphics that are easy to learn and let you become productive quickly; the ability to easily expand the system for facility changes and growing data needs; an open format such as SQL Server for data storage, which means you don't need to buy a third-party package for data analysis; and the ability to interface with software and hardware from multiple vendors."

If you evaluate your SCADA software and find that these criteria are not being met, it is time to seriously consider a change. You may not like the cost of changing, but the opportunity cost of not changing is far greater. Changing to the right platform today will not only allow you to improve your production, but will also make any additional or future projects faster, easier, and much less expensive.
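To make the "open format" criterion quoted above concrete, here is a minimal sketch of pulling historian records straight out of SQL Server for analysis, with no third-party analysis package in between. The connection string, table, and column names are hypothetical assumptions, not any particular vendor's schema:

```python
# Minimal sketch: reading SCADA history directly from SQL Server.
# The connection string, table and column names are hypothetical;
# substitute your own historian's schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=scada-db;DATABASE=Historian;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Daily average of one process variable, computed in plain SQL.
cursor.execute(
    """
    SELECT CAST(SampleTime AS DATE) AS Day, AVG(Value) AS AvgValue
    FROM TagHistory
    WHERE TagName = ?
    GROUP BY CAST(SampleTime AS DATE)
    ORDER BY Day
    """,
    "FT-101.PV",
)
for day, avg_value in cursor.fetchall():
    print(day, round(avg_value, 2))
```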

Source: http://www.automation.com/portals/process-automation/scada-rtu/is-it-time-for-new-scada-software-technology

Sunday, 9 November 2014

Advances in SCADA and RTU Technology for Next Generation Operators



By Randy Miller, Honeywell:

Much effort has been spent designing, implementing and testing backup control centers for the purpose of business continuity in the face of major or minor disasters. While proven effective in a disaster, a full operational transition to a backup control center is unnecessarily disruptive and can exacerbate a minor incident that could have been avoided in the first place. Several recent technical and human-factors advancements, now available for the automation of pipelines and adjacent process industries, enable the next generation of main control center operation. Significant reductions in the frequency of abnormal situations have been attained by understanding and addressing the root causes of all events that involve people, process and equipment. The result is best-in-class availability of the main control center and smooth, predictable response to all events, including low-frequency, high-impact events. These advancements are not bolt-on, custom applications; rather, they are integrated into the core SCADA solution and the core workflow of operations.

Abnormal Situation Management
Today’s pipeline regulations, such as the PIPES Act of 2006 and subsequent CFR changes, came into effect between 2006 and 2011. In anticipation of such increased expectations, the Abnormal Situation Management (ASM®) Consortium was formed in 1994, based on research started in 1989. The ASM Consortium is a group of leading companies and universities involved with the process industries that have jointly invested in research and development to create knowledge, tools and products designed to prevent, detect and mitigate abnormal situations that affect process safety in the control and SCADA operations environment. By working together to understand and mitigate abnormal situations, fundamental improvements in safety, reliability and efficiency have been attained at an overall low cost to industry.

Root cause analysis at over 20 sites showed that equipment factors account for an average of 36 percent of incidents. This includes degradation and failures in equipment, which are often preventable. Process factors account for an average of 22 percent of incidents, including process complexity, types of materials and manufacturing, and state of operation (steady state vs. startups, shutdowns and transitions). These are mostly preventable. People account for an average of 42 percent of incidents. Organizational structure, communications, environment, and documented procedures and practices all play a role in operator response. These are almost always preventable. The majority of these incidents are due to the actions or inactions of people.

Over the course of the last 20 years of ASM research, 45 best practice design principles were developed, published and adopted by leading vendors and operators to fundamentally mitigate root causes across categories including equipment, process and people. Honeywell has adopted these principles as core offerings across our SCADA and related portfolio.

Effective Console Operator HMI Design Practices
An HMI that complies with ASM guidelines and API RP 1165 incorporates features developed from extensive consideration of human factors and cognitive research. Optimal operator situational awareness, minimized fatigue, and rapid identification of and response to abnormal situations are the primary goals of the ASM HMI. Several case studies have shown that intuitive ASM displays enable all operators to perform with the responsiveness and consistency of the best operator. These attributes include:
  • Use of bright colors exclusively for alarms and critical process data drawing the operator/pipeline controller’s focus where it is needed
  • Animation that is used exclusively to bring process-critical or safety-related information to the foreground and to the attention of operators
  • Tabbed navigation linked with varied levels of detailed graphics with indication of active alarms
  • Pan and Zoom displays with a thumbnail view for situational awareness by including active alarms across the full display, not just what is currently in view
  • Advanced trending and graphics that promote rapid early event detection
  • Advanced shapes for temperature, pressure, level and flow values and control
  • Displays and trends that include the current target operating envelope so the operator/pipeline controller always knows where a variable should be for optimal performance, rather than waiting for an alarm after a boundary is crossed

Effective Alarm Management Practices
Effective management of alarms, particularly in alarm flood situations, is a key aspect of operator/pipeline controller effectiveness and the basis of alarm management recommended practices such as EEMUA Publication 191, ISA-18.2 and API RP 1167. An optimal alarm workflow includes the following (two of these mechanisms are sketched in code after the list):
  • Ability to filter, sort and add comments
  • Routing to other users via e-mail and SMS
  • Next generation alarm interface leveraging the innate benefits of processing patterns, dramatically reducing the time needed to diagnose and resolve upsets
  • Dynamic Alarm Suppression based on preconfigured rules
  • Alarm Shelving to temporarily remove problem alarms to avoid conflict with critical activities
  • Quick access to information on the cause of the alarm, the alarm impact potential and the recommended actions to address the alarm
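Here is a minimal sketch of alarm shelving and rule-based dynamic suppression from the list above, assuming a toy alarm model; the fields, priorities and the startup-flood rule are hypothetical, not any vendor's data model:

```python
# Toy model of two mechanisms listed above: alarm shelving and rule-based
# dynamic suppression. Fields, priorities and the startup rule are
# hypothetical, not any vendor's data model.
import time
from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str
    priority: int              # 1 = most critical
    shelved_until: float = 0.0  # epoch seconds; 0 = not shelved

    def shelve(self, minutes: float) -> None:
        """Temporarily remove a nuisance alarm from the active list."""
        self.shelved_until = time.time() + minutes * 60

def visible(alarms, plant_state):
    """Apply shelving and a dynamic-suppression rule before display."""
    now = time.time()
    for alarm in alarms:
        if alarm.shelved_until > now:
            continue              # shelved: hidden for a while
        if plant_state == "startup" and alarm.priority >= 3:
            continue              # example rule: mute low-priority
        yield alarm               # alarms during startup floods

alarms = [Alarm("PT-100.HI", 1), Alarm("FT-101.LO", 3)]
alarms[0].shelve(minutes=30)
print([a.tag for a in visible(alarms, "startup")])   # -> []
```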

Effective Procedural Practices
ASM research shows that incorrectly executed procedures contribute to many abnormal situations. Procedures are often complex and executed infrequently. Automating or semi-automating procedures addresses the inconsistency in procedure execution. The current best practice is to integrate procedural operations into the relevant SCADA displays, making it easy for operators to use compared with the standard operating practice (SOP) manual. These well-designed procedures capture the best operator practice, enabling all operators to perform the same way. Steps include manual changes that are confirmed by the operator and transitions that require the process to be in a particular state.
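The procedural idea can be sketched as a simple step runner in which each step is either a manual change confirmed by the operator or a transition that requires the process to be in a particular state. The step names and the state source are hypothetical examples, not a real SCADA interface:

```python
# Sketch of a semi-automated procedure runner: each step is either a manual
# change confirmed by the operator or a transition that requires the process
# to be in a particular state. Steps and the state source are hypothetical.

def run_procedure(steps, process_state, confirm):
    for kind, detail in steps:
        if kind == "manual":
            # manual change, confirmed by the operator before continuing
            if not confirm(f"Step complete: {detail}?"):
                raise RuntimeError(f"procedure halted at: {detail}")
        elif kind == "require_state":
            # transition that requires a particular process state
            if process_state() != detail:
                raise RuntimeError(f"process not in state {detail!r}")

startup = [
    ("manual", "open suction valve MOV-101"),
    ("require_state", "pressurized"),
    ("manual", "start pump P-101"),
]

# Demo: auto-confirm every manual step; a real HMI would prompt the operator.
run_procedure(startup,
              process_state=lambda: "pressurized",
              confirm=lambda prompt: True)
print("startup procedure completed consistently")
```

Encoding the procedure as data rather than tribal knowledge is what lets every operator execute it the same way.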

Shift Change
One of the most disruptive daily events in the control center is shift change. Inconsistencies in log books, communications and handover are common. It is not surprising that more abnormal events occur in the period following shift change than any other time during the shift. A best practice leverages an electronic logbook designed to log key information and facilitate effective handover communications. Tight integration with the SCADA system, alarm management and procedural operations makes it more likely that key information is not omitted and the transition to the next shift is smooth.

Managing multiple pipeline assets with Distributed System Architecture (DSA)
A common challenge in pipeline SCADA is managing large complex systems and incrementally scaling these systems over the lifecycle of the asset. It may be required to have multiple SCADA systems, such as one for each pipeline asset and one for each compressor station. Often, assets are acquired over time and a legacy of different brands of SCADA is brought into the enterprise. Attempts to integrate the SCADA from various assets and sub-stations are typically very constrained by generalized industry protocols and interfaces such as OPC.  Advancements in distributed system architecture provide multi-site, tight integration of clustered SCADA systems so that they function and appear to operators as a single, cohesive system. DSA supports zero engineering of remote tags, an integrated security model that retains individual user permissions, integrated alarms and acknowledgements, and efficient publish-subscribe algorithms. These advancements are enabled by a true global database for tags, alarms, functions and events that also supports seamless expansion and scalability from small point counts up to the world’s largest systems. DSA supports all permutations of hierarchical and peer control room strategies, as well as Backup Control Center. DSA is the best foundation for Collaborative Work Environment and Remote Operations strategies.
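Conceptually, the publish-subscribe replication behind DSA can be sketched in a few lines. This toy model (the class, node, and tag names are hypothetical) shows how a tag written at one site appears at a subscribed peer with no separate engineering of remote tags:

```python
# Toy publish-subscribe sketch of the DSA idea: each node keeps a tag
# database and forwards local updates to subscribed peers, so remote tags
# need no separate engineering. Names are hypothetical.

class ScadaNode:
    def __init__(self, name: str):
        self.name = name
        self.tags = {}           # local + replicated tag values
        self.subscribers = []    # peer nodes receiving our updates

    def subscribe(self, peer) -> None:
        self.subscribers.append(peer)

    def update(self, tag: str, value: float) -> None:
        """Write a local tag and publish the change to every subscriber."""
        self.tags[tag] = value
        for peer in self.subscribers:
            peer.tags[tag] = value   # peers see it with zero extra config

pipeline = ScadaNode("pipeline-A")
control_center = ScadaNode("main-control-center")
pipeline.subscribe(control_center)

pipeline.update("PT-100.PV", 5.2)
print(control_center.tags["PT-100.PV"])   # 5.2 - one cohesive system
```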

Improving Security of Pipeline Assets
Given the rise in the number and sophistication of attacks and threats, it is critical to have cyber security protection built into the SCADA system rather than added later as an afterthought. The old approach of building a hard shell around a soft core leaves multiple avenues open for outside attacks. The best practice starts at the core, embedding security into the infrastructure using the same rigorous processes that are designed for safe industrial operations. In addition, the current state-of-the-art approach employs layers of proven solutions to strengthen industrial cyber security with a portfolio of security controls supported by a team of global experts and sustained by technology. Leading vendors are playing a key role in developing industry standards.

Effective security requires effective integrated physical security. Standalone security systems deployed across a pipeline pose a challenge to operators, requiring them to access information from multiple systems when needed. Integrating geographically distributed sites into DSA allows autonomous security systems to communicate alarms and cardholder information, enabling multiple facilities to be operated efficiently and consistently across the entire organization without sacrificing the independence of each site. Effective integration into the operator's SCADA HMI increases incident detection rates and improves response times during an incident or emergency, while reducing operator workload and dependency on manual actions by enabling automated actions in one system. In addition, digital video and analytics, tightly integrated with SCADA, can now allow cameras to function as process sensors. Digital video that is specifically designed to integrate at the database level embeds alarms, events and digital recording triggers natively in the control system, adding another dimension of situational awareness for improved response time and decision-making.

No pipeline is truly safe without holistic security. The final layer of defense is operator and station based security. Specific levels of access and permissions are assigned to individual operators/pipeline controllers based on responsibility. Complete operational integration of cyber Third Party Interference (TPI) protection, physical TPI protection, access control and operator security in a single dashboard provides the only truly integrated safety and security solution for the process automation industry, while not being overly intrusive to normal operations.

Simplified SCADA Configuration with Equipment Templating
Cost, schedule and management of change are key criteria when selecting a SCADA system, covering both the initial configuration and expansion over time. Intersecting this trend is API RP 1168, "Recommended Practice for Pipeline Control Room Management", whose section 7 covers SCADA system management of change, including a configuration audit log. Equipment templates radically simplify all of these aspects by enabling configuration by equipment rather than by points. Through a simple template-driven concept, templates can include all the related SCADA configuration for a piece of equipment: all the points, any calculations, display elements, trend definitions, relationships (such as what is upstream/downstream), key parameters for the equipment, operations task-based filters, plus the SCADA communication settings for the RTU or PLC. It is then possible to configure a system by adding a single piece of equipment, requiring just a few details, instead of separately building many points and operator displays. Working with upstream oil and gas customers, Honeywell monitored how operators were managing their wells. It was observed to be a very labor-intensive process for the tasks being performed and, ultimately, other tasks that could add more value were sacrificed. By using task-based filters, configured as part of the equipment templates specifically for that equipment, finding the wells that needed attention could be completed in minutes instead of hours with true exception-based monitoring. As best practice evolves, only the task filters in the template need updating. Due to the inherent consistency, every operator becomes your best operator.
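A minimal sketch of the templating idea: one template stamps out all the points for each new piece of equipment, and a task-based filter defined once in the template finds the instances that need attention. The point names and the filter rule are hypothetical examples, not Honeywell's actual configuration model:

```python
# Sketch of equipment templating: one template generates the full point set
# for each equipment instance, and a task-based filter (defined once) finds
# the instances needing attention. Names and the rule are hypothetical.

WELL_TEMPLATE = {
    "points": ["Pressure", "Temperature", "RunStatus"],
}

def instantiate(name: str) -> dict:
    """Create the full point set for one well from the template."""
    return {f"{name}.{p}": 0.0 for p in WELL_TEMPLATE["points"]}

# Adding a well is one call instead of hand-building many points:
scada_db = {}
for well in ("WELL-001", "WELL-002", "WELL-003"):
    scada_db.update(instantiate(well))

scada_db["WELL-002.RunStatus"] = -1.0        # one well trips offline

def needs_attention(db, well: str) -> bool:  # the template's task filter
    return db[f"{well}.RunStatus"] < 0

print([w for w in ("WELL-001", "WELL-002", "WELL-003")
       if needs_attention(scada_db, w)])     # -> ['WELL-002']
```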

RTU
A critical component of pipeline automation is integration with RTUs and other field devices. Many legacy RTU offerings currently used in upstream and pipeline applications are very dated. This motivated Honeywell to introduce a new RTU in 2014, engineered to be best in class, that complements our high-availability, low long-term-cost SCADA platform. A few of the design objectives include the lowest power consumption in its class, the largest processing capacity, an operating temperature range up to 75 °C, built-in HART I/O, very flexible communications and bulk automated configuration. The HART feature allows pipeline operators to deploy intelligent pipeline instruments that can be remotely diagnosed and maintained, as well as enabling more effective Reliability Centered Maintenance strategies.

Conclusions
Several technology advances inspired by two decades of fundamental abnormal situation research have been discussed. Collectively, these modern technologies and practices have been proven to reduce incidents in the main control center by more than 30 percent and to sustain safe, optimal availability. Furthermore, capturing best operator practices in a procedural operations framework and equipment-based templates helps retain best practices for future generations. Implementing these best practices in an infrastructure that supports bulk configuration and scales at pace with the growing operation promotes minimal disruption over the lifetime of the asset.

About the Author
Randy Miller started his career in 1985 as an instrumentation journeyman working in the Judy Creek oil and gas field near Swan Hills, Alberta, Canada for Esso Resources Canada Limited. There he was engaged in basic instrumentation, radio telemetry, and chromatograph maintenance in the oil field and gas plant. After completing his B.Sc. and M.Sc. in Chemical Engineering at the University of Alberta in 1995, Randy spent three years with Mitsubishi Chemical Corporation at their Development and Engineering Research Center in Mizushima, Japan, where he led the control strategy analysis and design for several novel chemical processes. This work resulted in ten international patents and 15 publications. Since 1998 Randy has been with Honeywell Process Solutions in Thousand Oaks, California, where he has taken on many different roles in applied research, product development, product management, sales, business development and sales management. In his current role of global marketing director, he leads product portfolio strategy and business growth in the gas value chain.

Source: http://www.automation.com/portals/process-automation/scada-rtu/advances-in-scada-and-rtu-technology-for-next-generation-operators

B-Scada to Supply HMI/SCADA Software to Indonesian Power Plant


B-Scada to supply Enterprise HMI/SCADA software to Indonesian Power Plant. B-Scada’s Indonesian partner, LDK, will be installing the software as a web monitoring solution for the new Senipah Gas Power Plant operated by PT. Kartanegara Energi Perkasa.

The Senipah Power Plant is an important part of the Indonesian government’s ongoing efforts to provide electricity to the rapidly growing East Kalimantan region of Indonesia. Today, approximately 70 million Indonesians live without electricity. The government plans to connect an additional 1.3 million households through the year 2025.

About B-Scada
B-Scada specializes in the compelling visualization of real-time data. Its visualization technology and SCADA products are deployed in manufacturing, power & utilities, transportation, petrochemical, building automation, and other fields of business where visualization of real-time data is critical. B-Scada's in-house expertise and experience have provided the opportunity to partner with companies from various vertical markets and assist them in developing custom solutions that meet their specific needs.

B-Scada's goal is to help clients transform real-time production and operational data into actionable information through graphically compelling, functional, and intuitive user interfaces.

Industrial Network Security for SCADA, Automation | Process Control and PLC Systems

THE WORKSHOP
This workshop will give you a fundamental understanding of security in industrial networking and data communications technology. It will also present the key issues associated with security in industrial communications networks, and will assist managers, system operators and industrial data communications specialists in setting up secure systems.
On completion of the workshop you will have developed a practical insight into how to achieve optimum industrial network security for your organisation.
Topics covered include: introduction and terminology; firewalls; authentication, authorisation and anonymity; remote access to corporate networks; cryptography; VPNs; data security; desktop and network security; security precautions in a connected world; and internet security.

WHO SHOULD ATTEND?
If you are using any form of communication system, this workshop will give you the essential tools for securing and protecting your industrial networks, whether they are automation, process control, PLC or SCADA based. It is not an advanced workshop – but a hands-on one. Anyone who will be designing, installing, commissioning, maintaining, securing and troubleshooting TCP/IP and intra/internet sites will benefit, including:
  • Design engineers
  • Electrical engineers
  • Engineering managers
  • Instrumentation engineers
  • Network engineers
  • Network system administrators
  • Technicians

CONTENT SUMMARY
DANGERS
  • Hackers
  • Viruses
  • Denial-of-service
  • Information leakage
  • File manipulation
  • Database access
  • Elevation of privileges
  • Spoofing
  • SYN flooding
  • Router attacks
  • Sniffing
SECURITY POLICIES AND ADVISORY SERVICES
  • Corporate policies
  • CERT
  • Audits
  • Threats
  • Vulnerabilities
  • Countermeasures
  • Disaster recovery
PHYSICAL SECURITY
  • Physical and logical access to networked equipment
  • Network segmentation
AUTHENTICATION
  • Authentication basics
  • Client-side certificates
  • Passwords
  • Smart cards
  • Tokens
  • Biometrics
  • PAP
  • CHAP
  • RADIUS
  • TACACS/TACACS+
ENCRYPTION
  • Symmetric encryption schemes (DES, RC4)
  • Public-key encryption schemes (RSA)
  • Certificate Authorities (CAs)
PROXIES/FIREWALLS
  • Basic firewall operation
  • Network Address Translation (NAT)
  • Firewall types (IP filtering, stateful inspection, proxy, DMZ) 
INTRUSION DETECTION SYSTEMS (IDSs)
  • Types
  • Deployment
ROUTER SECURITY
  • Administrator access
  • Firmware upgrades
  • Logging
  • Access Control Lists (ACLs)
SWITCH SECURITY
  • Administrator access
  • Port based MAC address management
  • ACL filtering
  • Virtual LAN (VLAN) implementation
VPNS
  • Virtual Private Network (VPN) concept
  • Tunnelling
  • L2TP
  • IPSec
  • SOCKS 5
WIRELESS LANS
  • Encryption and authentication - current problems and developments
  • IEEE 802.1x
  • WEP
  • WZC
  • WPA
  • AES
  • LEAP
  • EAP-TLS
  • EAP-TTLS