Business Insights


New website & new projects in 2016

Introduction

It has been a while since my last update.

Key4ce hasn’t been sitting still in those months: we are working hard on releasing several new projects and some major version releases for existing projects.

  • New key4ce.com website in March
  • Key4ce osTicket Bridge major version release in March
  • Key4ce osTicket Bridge paid version release in March
  • First SolidCP release (follow-up to WebsitePanel) in April

and more to come…

Key4ce Website

In my spare time I have been working for quite a while on the successor to Key4ce.com.

It is finally (nearly) finished and should be released at the end of this week.

The site brings some big visual improvements and should be more user-friendly.

In addition, we have published a new Privacy Policy and Terms of Service.

The billing site will move from https://key4ce.com/billing to https://my.key4ce.com.

Key4ce osTicket Bridge

Key4ce osTicket Bridge will have a new (free) major version release which will include the following:

  • Canned responses
  • Internal notes
  • PHP 7 compatible
  • Minor bug fixes

The release is expected in approximately two weeks (mid-March).

Key4ce osTicket Bridge – Paid version

Key4ce osTicket Bridge will have its first paid version release in March.

It will launch with:

  • Knowledge base integration
  • Knowledge base ajax search
  • Ability to select departments per WordPress site
  • Administration permission management
  • and more to come…

SolidCP – Multi server control panel

As WebsitePanel has been deliberately killed off by its project manager, we and a few other Dutch companies have decided to continue with a 100% open-source fork.

The initial release will have:

  • New (modern) responsive theme
  • Working SolidCP installer with update capabilities
  • Install & upgrade documentation

Next release (scheduled for the end of April):

  • Skype for Business support
  • Exchange 2016 support
  • Automatic update support
  • and more…

It will be well worth the wait: we are confident that all the issues with WebsitePanel will be resolved. This includes stable, tested releases, a proper development process (alpha, beta, stable), and a regular release schedule to go with it.

This panel will stay 100% open source and free!


Graphic - representation of Endpoint Security platform

Image Source: securosis.com

Endpoint security management has become an area of much concern in recent years for a number of reasons, chief among them the ever-increasing incidents of hacktivism, APT, and malware attacks, and the proliferation of personal smartphones and tablets that employees bring into the workplace (related article: Bright and Dark Spots of BYOC).

The “security” part of the phrase “endpoint security” may be obvious to all, but the “endpoint” part could use a little explanation. An endpoint is a device on a TCP/IP network, especially one that is connected to the Internet; it can be a laptop, desktop PC, network printer, POS terminal, tablet, or smartphone. A more traditional (albeit self-referential) description of the endpoint from wireshark.org is: “the logical endpoint of separate protocol traffic of a specific protocol layer.” The network endpoint as we know it is dead, according to a published Microsoft report.

Endpoint security, to borrow the words in the same Microsoft report, is “the security of physical devices which may literally fall into the hands of malicious users.” This is a simplistic definition, but it quickly brings home the point. Because traditional endpoint security management has become inadequate, Microsoft came up with general recommendations more applicable to present realities:

  • Develop a detailed plan for responding to a security incident, such as: social engineering attempt, DDoS attack against the network/specific hosts/applications, lost/stolen device, unauthorized use of system/network privileges or unauthorized account access, system-wide malware outbreak.
  • Pay attention to support infrastructure systems, such as routers, firewalls, and similar assets.
  • Identify the support persons to contact in case of endpoint security breach, and keep their contact details within easy reach.
  • Develop simple and effective response procedures for each category of security incident, and get input from users affected by it.
  • Keep abreast with emerging endpoint security technologies, and learn how to choose the one solution – among a myriad of offerings – that matches the requirement of a particular network environment.

In a separate study on the state of endpoint security in 2013, the Ponemon Institute made a list of more specific recommendations:

  • For BYOD – Create acceptable use policies.
  • For privileged users at the device level – Define governance policies on: use of corporate assets; installation and use of third-party applications; and use of privilege management software for control of third-party application installation and enforcement of change control processes.
  • On access of critical data stored in the cloud – Establish policies and procedures defining and stressing the importance of protecting sensitive/confidential information.
  • For overall endpoint risk management – Improve collaboration between IT operations and IT security for better allocation of resources and creation of strategies to mitigate hacktivism, BYOD, third-party applications, and cloud computing risks.
  • For endpoint security technologies – Choose an integrated endpoint security suite that has vulnerability assessment, device control, and anti-virus and anti-malware functionalities, after conducting risk assessments.

Like any other set of security measures, the foregoing recommendations don’t guarantee perfect endpoint security, but ignoring one or more of them will weaken an organization’s endpoint security management. For example, not having a security incident response plan and related procedures will lead to panic and delays in taking proper action when a security incident does occur. And if BYOD practice is left to chance, mobile devices could serve as entry points for all kinds of attacks against the organization’s systems; the Ponemon report found that 80 percent of survey respondents consider laptops and other data-capable mobile devices a significant security risk to their organization because they are not adequately secured.

The dangers of weakly managed endpoint security often lead to major financial setbacks, negative legal implications, and loss of public and customer confidence in the organization.

Obviously, proactive steps can be taken before it is too late.


Graphic - diagram of SWG implementation

Image Source: edgeblue.com

A secure web gateway (SWG), in its early rudimentary implementation, is a firewall built at the application layer that performs an uncomplicated evaluation of preset rules in order to either allow web content to pass through or block it. Web-connected businesses originally put it in place to restrict employees’ use of the Web according to established company policy – for example, banning the use of public email services like Hotmail or Yahoo Mail on corporate workstations – to protect the privacy of online business communication.
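
To make that rule-evaluation idea concrete, here is a minimal Python sketch of the kind of allow/block decision an early SWG might make. The domains and keywords are invented examples, not a real policy, and a production gateway would of course do far more.

```python
# Minimal sketch of the rule evaluation an early secure web gateway performs:
# each outbound request is checked against preset allow/block rules.
# The rule lists below are illustrative only.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"hotmail.com", "mail.yahoo.com"}   # e.g. public email services
BLOCKED_KEYWORDS = {"torrent", "proxy"}

def evaluate_request(url: str) -> str:
    """Return 'ALLOW' or 'BLOCK' for a single web request."""
    host = urlparse(url).hostname or ""
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return "BLOCK"
    if any(keyword in url.lower() for keyword in BLOCKED_KEYWORDS):
        return "BLOCK"
    return "ALLOW"

if __name__ == "__main__":
    for url in ("https://mail.yahoo.com/inbox", "https://example.com/docs"):
        print(url, "->", evaluate_request(url))
```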

From the beginning, Web access has required the use of a web browser application. Luckily for users, browsers were made available to the public for free, and they have remained free until now. However, the design of the early generations of browsers apparently overlooked three important considerations: the browsers were to be used in a basically unsecure environment; they were prone to running various types of untrusted code; and they could pass data around without checking for possible risks. Some undesirable elements of the Web community saw these weaknesses in browser design as an opportunity to do all sorts of mischief at the expense of well-intentioned users, and Web security issues quickly entered the picture. In spite of new security features added into each release of updated web browser versions, security threats have persisted.

As Web technologies developed rapidly within only a few years of the World Wide Web first becoming available to the public, security threats also increased rapidly in number and sophistication. This triggered a rapid evolution of SWG technology and ushered in the birth of commercial SWG appliances. Enterprises that depend on the Web for their daily operations need the added level of protection an SWG provides.

SWG appliances cope with current known security threats by coming up with appropriate counter-measures in the form of new features and controls; these include, but are not limited to:

  • support for data loss prevention (DLP)
  • improved URL filtering mechanism
  • better malware detection
  • behavioral analysis, data “fingerprinting”, content control, reputation analysis, browser code scanning in real time, and other types of analytics
  • expanded administrator control of Web/email/data traffic
  • analysis and control of dynamic web page elements
  • control of access to web services based on parameters such as time of the day or Web activity level
  • capability to adjust bandwidth utilization parameters

Enterprises looking for SWG solutions need to be aware that there are many SWG appliance vendors and that their appliances do not all have the same features and controls. They need to review product specifications and choose the solution that satisfies most, if not all, of their needs.


Graphic - ITIL (Information Technology Infrastructure Library)

Image Source: rogergrossi.com

If you are a big company that has the financial resources to maintain a full-blown IT department, you have probably been implementing the best practices documented in the information technology infrastructure library (ITIL), and this post may have little relevance for you.

This post is more appropriately for those who either have yet to know what ITIL is, or have probably heard about ITIL but haven’t given it much thought because they have not seen its significance in their particular situation.

ITIL is a framework of best practices to manage IT operations and services so that efficiency can be improved and predictable service levels can be achieved. Its history dates back to the mid-1980s when the UK government decided to document years of knowledge pooled from different people worldwide who managed IT helpdesks. The framework helps a business to establish a standard way of planning, selecting, delivering, and supporting IT services. It transforms the IT role from backend support to business service partner.

When we remember that ITIL is a library, it is easy to appreciate the fact that it includes five core books (or volumes/publications), each corresponding to a certain IT service lifecycle phase. The five books are: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.

The first volume in the library, ITIL Service Strategy, encompasses strategy and value planning plus related topics.

The second book, ITIL Service Design, presents guidelines for developing and maintaining IT policies, documents, and design architectures for service solutions and processes.

The third book, ITIL Service Transition, focuses on long-term change and release management concepts and practices, and shows how to transition services into the business environment.

The fourth volume, ITIL Service Operation, covers change management, application management, scalability, control and measurement, processes and function, and all activities necessary for daily operational excellence.

The final book, ITIL Continual Service Improvement, is about service quality within the continual improvement context, and also service retirement.

It takes time and patience (and possibly aptitude) to go through all the volumes in ITIL. How far you would go into its details depends on the scale of IT usage in your own business. One thing is sure though: you need to understand at least the basics of ITIL service support so that you will be prepared to use one of the IT helpdesk software products available in the market. The basics deal with five essential concepts: Incident Management, Problem Management, Change Management, Release Management, and Configuration Management Database (CMDB).

Incident Management deals with restoring IT services ASAP after the occurrence of an incident, which is a disruption of normal service that affects both the user and the business.

Problem Management helps you find the root cause of incidents and reduce their impact on the business.

Change Management is the process of coordinating IT changes within acceptable risk level and with minimal disruptions.

Release Management is the execution or implementation of the plan made in the course of the Change Management process. Its goals are user education and smooth implementation of changes.

CMDB is a repository of IT assets: hardware, software, documentation, and description of the relationship among the three.
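
As a rough illustration of what a CMDB record might hold, here is a small Python sketch. The item types, names, and relationships are illustrative assumptions, not a prescribed ITIL schema.

```python
# Minimal sketch of a CMDB: configuration items (assets) plus the
# relationships among them. Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    ci_id: str
    ci_type: str            # "hardware", "software" or "documentation"
    name: str
    relationships: list = field(default_factory=list)  # (relation, other ci_id)

cmdb = {
    "SRV-001": ConfigurationItem("SRV-001", "hardware", "Mail server"),
    "APP-010": ConfigurationItem("APP-010", "software", "Exchange 2016"),
    "DOC-100": ConfigurationItem("DOC-100", "documentation", "Exchange runbook"),
}
cmdb["APP-010"].relationships.append(("runs_on", "SRV-001"))
cmdb["DOC-100"].relationships.append(("describes", "APP-010"))

for ci in cmdb.values():
    print(ci.ci_id, ci.name, ci.relationships)
```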

A good resource for digging deeper into the basics just described is the ITIL Whitepaper, which is available for free download from the ManageEngine website.


Graphic - Linux server management tool

Image source: static.tenable.com

Does your network run mainly on a Linux server? Just like any other network server, your Linux system needs to be checked regularly to keep it in good shape. There are a few things you may be able to do without having to call for outside help.

One of the most important routines to keep in mind is to ensure that your operating system is always up to date. Neglecting OS updates exposes your Linux server to security vulnerabilities and is an open invitation for hackers to attack your system. Linux system updates frequently fix recently discovered vulnerabilities or otherwise shield the system from emerging security issues. Because Linux system updates are released frequently, it pays to turn on the system’s auto-update option. However, even with auto update, there is still a need to check the system for security updates, which means watching the release announcements your distribution sends out for security patches or kernel updates.
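
As one possible illustration, the Python sketch below uses apt-get’s simulate mode to report packages with pending updates on a Debian/Ubuntu style server. The approach and command are assumptions for that family of distributions; adapt it for yum/dnf based systems.

```python
# Hedged sketch: report pending updates by simulating an upgrade with apt-get.
# Nothing is installed or changed; the -s flag makes it a dry run.
import subprocess

def pending_updates() -> list:
    """Return the package lines apt-get would upgrade (dry run only)."""
    result = subprocess.run(
        ["apt-get", "-s", "upgrade"],
        capture_output=True, text=True, check=False,
    )
    return [line for line in result.stdout.splitlines() if line.startswith("Inst ")]

if __name__ == "__main__":
    updates = pending_updates()
    print(f"{len(updates)} packages have pending updates")
    for line in updates[:10]:
        print(" ", line)
```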

A close cousin of OS updates is application updates. Outdated applications are also breeding grounds for security vulnerabilities, not to mention performance degradation over time. You need a procedure, supplemented by relevant utility software, for receiving notifications of application updates and for actually installing them.

Since the risks of data corruption, inadvertent deletion, or even storage system crash are always present, the next important routine to consider is backups: system backup, application installation backup, and critical data file backup. This means that you need to have separate storage for this purpose and the storage needs to have adequate space. Once backups are in place, there is even a greater need to make sure that the backups are working; nothing is more frustrating than discovering that in the event of a system disaster, the backups could not be retrieved because they themselves are corrupted. There has to be a procedure for testing backups. In addition, backup location counts. Backups should not be kept in the same place as the production system because you could lose them along with the latter in the event of physical disasters like flood and fire.
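
A simple way to start testing backups is to verify that a backup copy can still be read back and matches its source. The sketch below compares SHA-256 checksums; the file paths are placeholders, and a real test procedure would also restore and open the data.

```python
# Hedged sketch: verify a backup by comparing checksums of source and copy.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(source: str, backup: str) -> bool:
    return sha256_of(source) == sha256_of(backup)

if __name__ == "__main__":
    # Placeholder paths - point these at a real file and its backup copy.
    ok = backup_is_intact("/var/lib/mysql/dump.sql", "/mnt/backup/dump.sql")
    print("backup verified" if ok else "backup mismatch - investigate")
```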

Most likely, your Linux system uses a RAID storage system (please click here for more information). You should ascertain that the RAID notification system is properly configured and verify that it works. While RAID improves data I/O performance, it also presents the risk of major data loss if a RAID notification escapes your attention or is not acted upon promptly.

If your Linux server is on-premises, keeping an eye on hardware error conditions could save your system from major trouble. The server logs report on the status of different parameters such as network performance, disk reads and writes, power saving, overheating, and disk/CPU/RAM utilization; you should check them for error notices and serious deviations from benchmarks or acceptable value ranges in order to guard against hardware failure.
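
For example, a small script can scan a system log for hardware-related error keywords. In the sketch below the log path and keyword list are assumptions to adjust for your distribution and hardware.

```python
# Hedged sketch: scan a system log for hardware-related error keywords.
import re

LOG_FILE = "/var/log/syslog"   # assumption: /var/log/messages on RHEL/CentOS
PATTERN = re.compile(
    r"(I/O error|temperature above threshold|ECC error|RAID.*degraded)",
    re.IGNORECASE,
)

def scan_log(path: str) -> list:
    """Return log lines that match any hardware error pattern."""
    hits = []
    with open(path, errors="replace") as handle:
        for line in handle:
            if PATTERN.search(line):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for hit in scan_log(LOG_FILE):
        print(hit)
```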

On the other hand, if your Linux server is co-located in a certain data center or is otherwise externally sourced, you should have at your disposal utility software for remote console, remote rescue, and remote reboot. These help you manage your system from a remote location and respond promptly when some emergency action needs to be taken.

To further ensure uninterrupted performance, you must audit your server security on a regular basis — for example, monthly or quarterly. An audit helps you discover potential threats arising from such conditions as outdated OS, improper system configuration, and irregular activity occurring in the system. Tools like rootkit detection utilities and software products such as OpenVAS and Nessus are available for this purpose.
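
Two simple checks that can complement such tools, rather than replace them, are listing listening TCP ports and finding world-writable files. The Python sketch below illustrates both, assuming the ss utility is available and using /etc as a placeholder directory.

```python
# Hedged sketch of two basic audit checks; it complements, not replaces,
# scanners such as OpenVAS or Nessus.
import os
import stat
import subprocess

def listening_ports() -> str:
    # 'ss -tln' lists listening TCP sockets on most modern Linux systems.
    return subprocess.run(["ss", "-tln"], capture_output=True, text=True).stdout

def world_writable(root: str) -> list:
    """Return files under root that anyone can write to."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue
            if mode & stat.S_IWOTH:
                findings.append(path)
    return findings

if __name__ == "__main__":
    print(listening_ports())
    print("\n".join(world_writable("/etc")))
```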

The points discussed here do not cover the comprehensive range of measures you can take to keep your Linux server in A-1 condition, but they do help you take steps in the right direction.


Infrastructure, in the context of IT servers, is simply organized server support. It refers to how the servers are physically, logically, and/or functionally grouped together and includes the tools (mostly provided by vendors and occasionally custom-made by the system administrator) for supporting them. The infrastructure only works if the component servers are correctly set up, its services are properly managed, and its operations are diligently monitored. In the IT world, an infrastructure necessarily involves many servers, and for this reason there must be tools to keep the servers running.

Because various types of servers and thousands of vendors providing them exist, there is consequently a great variety of tools available. The categories of tools include: application deployment and management, configuration and change management, cluster management, network administration, web systems management, system performance testing, user management, security control, patch and update management, storage management, backup/restore and archiving, disaster recovery, IT asset and inventory management, license management. The list, although lengthy, is only a tiny fraction of product category offerings that vendors have placed in the market, but it illustrates the idea of how much is required to run a server infrastructure.

Knowing what tools are available is not enough. To be able to choose the appropriate tools and use them effectively, the IT professional in charge of running the servers must be thoroughly knowledgeable of: what the infrastructure elements are generally used for; the hardware and software within the infrastructure, and how they are configured; the location of the infrastructure components. Despite this knowledge, however, the choice of tools may be affected by such constraints as difficulty in comparing products, justifying ROI, and budget.

The task of running servers — particularly numerous servers — is never easy, and working around any constraints to acquire the needed tools is something an administrator absolutely needs to do.


Graphic - representation of converged infrastructure

Image Source: HP

The idea of converged infrastructure revolves around forming a single optimized IT package by putting together several components to meet present-day business needs. According to HP, converged infrastructure meets these needs “by bringing storage, servers, networking, and management together – simply engineered to work as one.” The end result is interoperability of IT components using resource pools based on a common platform. Convergence embraces all target resources in one shot, not on a piecemeal basis.

Network performance, supported by uniform applications and resources, is critical for making infrastructure convergence work; this is especially true for convergence implemented in a virtual environment. Virtualization, as the experts tell us, is the starting point of convergence. There is one interesting observation regarding the relationship between networking and convergence: although the network may be a target for convergence, it is at the same time the “connecting tissue” that binds the physical resources to the central abstraction upon which virtualization is based.

However, successful convergence also depends on other factors, such as the continuing evolution of NaaS (Network as a Service, a cloud service) and SDN (software-defined networking).

Infrastructure convergence can be achieved at the network and cloud levels. In fact, the evolution of the cloud has helped make the idea of fully converged infrastructure a reality. The cloud creates the need for abstraction of certain IT components such as servers, storage systems and network connections into virtual resources. Users manage these virtual abstractions through APIs (application programming interfaces). Using the network, the APIs distribute resources from a pool of various hardware elements to applications.

NaaS is a concept founded on two network missions created by the cloud: one, to connect the items comprising the resource pool collectively called the cloud: application, compute, and storage elements; and two, to support the connectivity needed by applications.

SDN is a service to support the need of NaaS. Three approaches to SDN have been developed: virtual overlay network (a.k.a. SDN overlay network), centrally controlled SDN, and non-centrally-controlled SDN.

In simple terms, the virtual overlay network allows NaaS to substitute for a traditional VPN (virtual private network). This is also a multi-tenant model that has partitioned services for all users and applications.

Centrally-controlled SDN enables a software controller using the OpenFlow protocol to manage network traffic by creating appropriate rules in every device. All aspects of traffic management and connectivity are directed by software.

The non-centrally-controlled SDN model seeks to achieve the benefits of OpenFlow-based SDN minus the burden of a control function to direct all connections and traffic. Instead of focusing on changes in network technology, this model prefers APIs for software control. By building on current network practices and protocols, this model can converge existing network devices into future SDNs.

Market observers have noted indications that a unified SDN encompassing the three models mentioned is already in the works.


In 2014, security researchers at Google uncovered at least three serious vulnerabilities in the Network Time Protocol (NTP). This is quite serious because of the sheer number of computers that could become targets of DDoS attacks (click here to read the earlier blog) exploiting these vulnerabilities. The NTP Project, which has continued to develop the protocol standard since it was first published in 1985, produces the specifications of the software and protocol behind the clocks running in tens of millions of computers worldwide.

One of the reported vulnerabilities is a multiple stack-based buffer overflow flaw identified as CVE-2014-9295. Using a cleverly crafted packet sent from a remote location, a hacker can trigger the flaw and execute malicious code in the target system. The privilege level of this code is the same as that of the ntpd process. The flaw is present in ntpd releases before version 4.2.8, according to the National Vulnerability Database website of NIST. The NTP Project released ntpd version 4.2.8 on December 18, 2014 in response to this particular reported vulnerability.
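
A quick way to confirm whether a server still runs a vulnerable release is to compare the installed ntpd version against 4.2.8. The sketch below assumes ntpd prints a "4.x.y" style banner for the --version flag, which may differ across builds and distribution packages.

```python
# Hedged sketch: flag ntpd installations older than the patched 4.2.8 release.
import re
import subprocess

PATCHED = (4, 2, 8)

def ntpd_version():
    """Parse the version triple from ntpd's version banner."""
    result = subprocess.run(["ntpd", "--version"], capture_output=True, text=True)
    banner = result.stdout + result.stderr      # some builds print to stderr
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", banner)
    if not match:
        raise RuntimeError("could not determine ntpd version")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    version = ntpd_version()
    status = "OK" if version >= PATCHED else "VULNERABLE - upgrade to 4.2.8 or later"
    print("ntpd", ".".join(map(str, version)), "->", status)
```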

Another flaw discovered by the Google researchers is the generation of cryptographically weak authentication keys by NTP which could give rise to multiple problems.

There is also a vulnerability caused by missing return on error. The ICS-CERT website describes this flaw as follows: “In the NTP code, a section of code is missing a return, and the resulting error indicates processing did not stop. This indicated a specific rare error occurred, which does not appear to affect system integrity. All NTP Version 4 releases before Version 4.2.8 are vulnerable.”

The flaws represent an opportunity for attackers, including those with low skill level, to potentially compromise systems using NTP Version 4 releases earlier than version 4.2.8.

To mitigate the potential security threat from the flaws, users have been strongly urged to act immediately to ensure that the NTP daemons (ntpd) used in their systems are not vulnerable to DDoS attacks. The recommendation on the NTP website is to defeat DDoS attacks by implementing ingress and egress filtering through BCP38.

Of course, users should also install NTP Version 4.2.8 if they have not yet done so.


Intrusion detection systems (also called ID systems and IDS) have been, and are still constantly being developed in response to past and present attacks on many high-profile websites and networks, including those of Sony, eBay, Yahoo Mail, Google, Apple iCloud, UPS, JP Morgan, NASA, the White House, NATO, the Pentagon, and the U.S. Defense Department.

IDS is a type of security management system for computers and networks. It is a network security technology originally designed to detect exploits against target computers or applications that took advantage of certain vulnerabilities. At present, IDS are designed to detect many other ways of compromising the security of an IT network. An IDS is often a package consisting of hardware and/or software systems that automate the monitoring of events occurring in a computer system or network so they can be analyzed for symptoms of security problems.

The basic functions of an IDS are gathering and analyzing information from various parts of a computer or a network in order to identify possible security breaches, whether attacks from external origins or abuse and misuse from within an organization. To assess the security of an IT or network system, an IDS often uses a method called “scanning”, or vulnerability assessment.

An IDS typically involves a two-part process, one component passive and the other active. The passive component inspects a system’s configuration files, password files (to detect weak passwords), and policy audit logs (to detect violations). The active, network-based component re-enacts known attack methods using installed mechanisms and records how the system responds. From these processes certain data are captured, usually from packets passing through the system, and reported for subsequent analysis. Based on the results of that analysis, appropriate steps can then be taken to counter any discovered threats.
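
As a toy illustration of the gather-and-analyze idea (nothing close to a production IDS), the following Python sketch counts failed SSH logins per source address in an auth log and flags noisy addresses. The log path, message format, and threshold are assumptions.

```python
# Hedged sketch: a log-based check that flags possible SSH brute-force sources.
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"        # assumption: Debian/Ubuntu location
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10                        # arbitrary example cut-off

def suspicious_sources(path: str) -> Counter:
    counts = Counter()
    with open(path, errors="replace") as handle:
        for line in handle:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for address, hits in suspicious_sources(AUTH_LOG).most_common():
        if hits >= THRESHOLD:
            print(f"possible brute force from {address}: {hits} failed logins")
```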

So, who needs an IDS? Everyone who uses a networked IT system needs it because everyone is a potential target of attacks coming from different sources, near or far. However, everyone must be aware of the fact that there is not a single, universal IDS that fits all needs. An individual or business enterprise needs to know the type of IDS that is appropriate to one’s circumstances, and this may be difficult to do in many cases because it needs a high degree of technical know-how. Network and IT security experts can help in reviewing one’s need for an IDS and designing a solution that matches the need.

 


“More creative and varied” is how Brad Casemore describes the current nature of DDoS attacks in a recent article on the TechTarget website. Casemore, who is research director at International Data Corporation (IDC), said that the burden is on the shoulders of IT product/service vendors to come up with improved solutions for detecting and mitigating threats like DDoS.

The need for such solutions becomes even greater with the growing trend of encrypting network traffic, which increases the likelihood of abuse by hackers and creates yet another vulnerability to security threats. This is the observation of Paul Nicholson, product marketing director at A10, a company that provides application networking technologies focused on optimizing the performance of data center applications and networks.

What A10 has done lately is produce an anti-DDoS appliance branded Thunder TPS (threat protection system). The product may be relevant only to large data centers at this time, because that is apparently the user category A10 primarily had in mind when designing it. Be that as it may, the important thing to note is that the idea of an anti-DDoS appliance has been implemented and is now on the market.

Making data centers the environment model for Thunder TPS was influenced by the escalating incidence of complex DDoS attacks against data centers and large enterprises as a whole. This turns out to be a blessing for the user community, because the resulting product takes a two-pronged approach to threat mitigation, addressing both the breadth and the size of attacks.

Like all other existing technology products designed for contending against security threats, Thunder TPS is not invincible. “Really big attacks could overwhelm it,” says security analyst Adrian Sanabria of 451 Research. Sanabria recommends pairing Thunder TPS with “something cloud-based or upstream”.

Nicholson gave some insights into the DDoS appliance’s attack prevention measures. Thunder TPS comes bundled with software that allows users to block attacks flexibly. Users can use regular expression rules; they can also program rules using the product’s aFlex tool.
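
Purely as a generic illustration of regular-expression blocking rules (this is not A10’s aFlex syntax or configuration), a rule check might look like the following sketch, with invented patterns.

```python
# Generic illustration of regex-based blocking rules; NOT the aFlex tool,
# only a sketch of the idea of programmable filtering rules.
import re

BLOCK_RULES = [
    re.compile(r"User-Agent:.*(sqlmap|nikto)", re.IGNORECASE),   # known scan tools
    re.compile(r"GET /(wp-login\.php|xmlrpc\.php)"),             # frequently abused paths
]

def should_block(raw_request: str) -> bool:
    return any(rule.search(raw_request) for rule in BLOCK_RULES)

if __name__ == "__main__":
    sample = "GET /xmlrpc.php HTTP/1.1\r\nUser-Agent: sqlmap/1.0\r\n"
    print("blocked" if should_block(sample) else "allowed")
```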

In addition, Thunder TPS features “more robust SSL protection to validate whether clients attempting to access the network are legitimate or part of a botnet” (to use Nicholson’s words). The appliance can detect the presence and identity of potential threats through its access to “more than 400 destination-specific behavior counters”. Its software enables inspection of MPLS-encapsulated traffic and the use of NAT (network address translation) as an alternative to tunneling when the appliance moves sanitized traffic to other parts of the network.

Considering that Thunder TPS is data center oriented, users can expect that it is not a plug-and-play affair. They are likely to need their in-house IT experts to coordinate with the Thunder TPS deployment team, plus the help of external IT professionals if necessary.

