Which of these is a common method for bypassing application layer proxy firewalls?

Attack Detection and Defense

Brad Woodberg, ... Ralph Bonnell, in Configuring Juniper Networks NetScreen & SSG Firewalls, 2007

Understanding Application Layer Gateways

Application Layer Gateways are algorithms within ScreenOS that handle dynamic firewall policies that certain protocols require, such as FTP. Many such protocols were designed without security or other access controls in mind, which can cause problems when firewalls are introduced.

For example, FTP uses multiple sessions to facilitate file transfers—a primary command channel, and secondary data channels for directory listings and file transfers. Often, these data channels will flow in a direction opposite that of the original command channel. Since these data channels could connect on any port, it's almost impossible to create a static firewall policy that would permit these data channels and yet still provide adequate protection.

The FTP ALG automatically solves this problem by monitoring the FTP command channel, looking for FTP port commands that specify which source and destination ports are being requested, and dynamically opening a firewall policy for that specific combination of source IP/port and destination IP/port (called a gate) that permits the session to flow. Once the session is complete, the gate is immediately closed.
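The core of that gate logic can be sketched in a few lines. The `PORT h1,h2,h3,h4,p1,p2` encoding is standard FTP (RFC 959), with the port carried as two bytes (p1 × 256 + p2); the function itself is only an illustration, not ScreenOS code:

```python
def parse_port_command(line):
    """Parse an FTP PORT command of the form 'PORT h1,h2,h3,h4,p1,p2'
    into an (ip, port) pair. The port is encoded as p1 * 256 + p2."""
    verb, _, args = line.strip().partition(" ")
    if verb.upper() != "PORT":
        raise ValueError("not a PORT command")
    fields = [int(f) for f in args.split(",")]
    if len(fields) != 6 or not all(0 <= f <= 255 for f in fields):
        raise ValueError("malformed PORT arguments")
    ip = ".".join(str(f) for f in fields[:4])
    port = fields[4] * 256 + fields[5]
    return ip, port

# The ALG would open a gate for exactly this endpoint pair:
ip, port = parse_port_command("PORT 192,168,1,10,7,138")
# port = 7 * 256 + 138 = 1930
```

Once the data session matching this gate closes, the entry is deleted, so the hole in the policy exists only for the lifetime of the transfer.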

The FTP ALG also handles the special case where the FTP session flows through a NAT interface. In this circumstance, the endpoints don't always realize their addresses are being translated midstream. The FTP port commands use whatever IP the endpoint hosts’ interfaces are configured for, which, in the case of a host behind a NAT firewall, will typically be unreachable from the Internet.

The ALG handles this at the application layer by modifying the ASCII port command in situ, replacing the inside IP with the IP of the NAT interface. Since port commands pass the IP address as ASCII text, the chances are high that the character counts of the inside IP and the external IP won't match (an 11-character inside address might be translated to a 15-character external address, or to one of only 7 characters). The firewall cannot inject or remove these bytes of data without modifying the TCP checksum as well as the TCP sequence numbers. It achieves this by essentially proxying the connection at the TCP layer. This is similar to the SYN proxy feature used by the TCP flood SCREEN setting.
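A minimal sketch of the in-place rewrite, using illustrative addresses; the returned length delta is exactly the adjustment the firewall must then apply to subsequent TCP sequence numbers and checksums:

```python
def nat_rewrite_port(command, inside_ip, outside_ip):
    """Rewrite the address portion of an ASCII FTP PORT command, as an
    ALG must when the inside host's address is translated. Returns the
    rewritten command and the change in payload length."""
    old = inside_ip.replace(".", ",")    # PORT encodes the IP comma-separated
    new = outside_ip.replace(".", ",")
    rewritten = command.replace(old, new)
    return rewritten, len(rewritten) - len(command)

cmd = "PORT 10,1,1,5,7,138"
new_cmd, delta = nat_rewrite_port(cmd, "10.1.1.5", "203.0.113.100")
# new_cmd = "PORT 203,0,113,100,7,138", delta = +5: every later sequence
# number on this stream must be shifted by 5 bytes
```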

NetScreen ALGs are different from many competitors' products. Several other firewall vendors use full protocol proxies, which are themselves vulnerable to attack, misconfiguration, or protocol obsolescence as new commands, options, and features are added to a protocol. Check Point's FireWall-1 uses tiny proxies to validate data on protocols like HTTP, FTP, and SMTP. While this method is very flexible, it can still cause problems if the proxy encounters a valid command it has not been programmed to handle; the session can break because the proxy won't forward what it thinks is an invalid command. Furthermore, since the firewall participates in the stream at the application layer, it is very possible (and has even happened) that the proxy itself is vulnerable to a security flaw. Since FireWall-1 runs on Windows, Linux, and Solaris, shellcode for these platforms is relatively easy to find. NetScreen firewalls do not participate in the exchange at the application layer, which isolates them from these sorts of attacks.

Some protocols just don't support being proxied. Microsoft's Server Message Block and Remote Procedure Call both require a real endpoint connection. While these are not commonly Internet-transiting protocols, a good defense-in-depth strategy would still have this traffic flowing through firewalls that need to know how to handle it. A new ALG found in ScreenOS 5.1 allows users to filter at the application layer for MS-RPC by parsing globally unique identifiers (GUIDs)—a unique 128-bit number used by Microsoft to label process endpoints. Custom-defined services are created based upon GUIDs, which are then used in a policy. This enables you to create policies that allow or prevent access to individual processes on a Windows system. This is very handy for protecting from attacks such as Blaster, Sasser, Agobot, and others that use MS-RPC as one of their attack vectors.
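A custom GUID-based service of the kind described above could be modeled as a simple allow-list; the GUID below is purely illustrative, not a real Microsoft interface identifier:

```python
# Hypothetical policy table mapping permitted MS-RPC interface GUIDs;
# a real deployment would list the GUIDs of the specific services
# (file sharing, spooler, etc.) that should remain reachable.
ALLOWED_GUIDS = {
    "12345678-1234-1234-1234-123456789abc",  # illustrative GUID only
}

def rpc_policy(bind_guid):
    """Permit an MS-RPC bind only if its interface GUID appears in the
    custom service object; everything else is denied."""
    return "permit" if bind_guid.lower() in ALLOWED_GUIDS else "deny"
```

Because worms such as Blaster target specific RPC interfaces, a deny-by-default table like this blocks their bind attempts while leaving whitelisted services reachable.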

Other vendors tend to cut corners and, for the sake of performance, will implement a very simple ALG-like algorithm that seems to solve a problem but has unexpected consequences. Just recently, Symantec issued a security update for its DNS ALG. Apparently, the DNS ALG worked like so: if a UDP packet arrived with a source port of 53, it was assumed to be a DNS reply to a request that had already gone out through the firewall, and it was permitted through without any session lookup. The ALG would also bypass any incoming policy explicitly blocking the packet by destination port, destination IP, or source IP. The flaw was so fundamental it would even bypass protections designed for the firewall's own management interface. When this oversight was made public, hackers discovered that by sending management packets from UDP source port 53 to the Simple Network Management Protocol (SNMP) port on the firewall, they could successfully command the firewall and change its settings without being authenticated. A patch was later released. ScreenOS features are subject to rigorous security reviews at various stages of the development process to avoid fundamental logic flaws such as this.
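The difference between that shortcut and a proper session lookup can be shown side by side; the field names and addresses here are illustrative, not taken from the Symantec product:

```python
def flawed_dns_alg(pkt, sessions):
    """The reported flaw: any UDP packet sourced from port 53 is assumed
    to be a DNS reply and admitted without any session lookup."""
    return pkt["src_port"] == 53

def correct_dns_check(pkt, sessions):
    """A reply is admitted only if it matches an outstanding outbound
    request recorded in the session table."""
    key = (pkt["dst_ip"], pkt["dst_port"], pkt["src_ip"], pkt["src_port"])
    return key in sessions

# A forged packet aimed at the firewall's SNMP port (161) from source
# port 53 sails through the flawed check but fails the correct one.
forged = {"src_ip": "198.51.100.66", "src_port": 53,
          "dst_ip": "203.0.113.1", "dst_port": 161}
```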

ScreenOS currently has 26 ALGs, including FTP, DNS, and H.323, with more being released with every new version. These ALGs require little to no configuration to operate properly. They automatically detect appropriate traffic on the registered ports for the protocol they handle and then do their jobs. As mentioned earlier, these ALGs can be reapplied to arbitrary ports using custom service objects as needed.


URL: https://www.sciencedirect.com/science/article/pii/B9781597491181500125

Logically Segregate Network Traffic

Thomas Porter, Michael Gough, in How to Cheat at VoIP Security, 2007

Medium-Depth Packet Inspection

Application layer proxies or gateways (ALGs) are a second common type of firewall mechanism. ALGs peer more deeply into the packet than packet filtering firewalls but normally do not scan the entire payload. Unlike packet filtering or stateful inspection firewalls, ALGs do not route packets; rather, the ALG accepts a connection on one network interface and establishes the cognate connection on another network interface. An ALG provides intermediary services for hosts that reside on different networks, while maintaining complete details of the TCP connection state and sequencing. In practice, a client host (running, for example, a Web browser application) negotiates a service request with the ALG, which acts as a surrogate for the host that provides services (the Web server). Two connections are required for a session to be completed—one between the client and the ALG, and one between the ALG and the server. No direct connection exists between hosts.

Additionally, ALGs typically possess the ability to do a limited amount of packet filtering based upon rudimentary application-level data parsing. ALGs are considered by most people to be more secure than packet filtering firewalls, but performance and scalability factors have limited their distribution. An adaptive (a term coined by Gauntlet), dynamic, or filtering proxy is a hybrid of packet filtering firewall and application layer gateway. Typically, the adaptive proxy monitors traffic streams and checks for the start of a TCP connection (SYN, SYN-ACK, ACK). The packet information from these first few packets is passed up the OSI stack, and if the connection is approved by the proxy's security intelligence, a packet filtering rule is created on the fly to allow this session. Although this is a clever solution, UDP packets, which are stateless, cannot be controlled using this approach.
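The adaptive-proxy behavior described above might be sketched like this, with flow direction simplified to a single tuple key; the class and callback names are invented for illustration:

```python
class AdaptiveProxy:
    """Sketch of an adaptive (dynamic) proxy: watch the TCP three-way
    handshake (SYN, SYN-ACK, ACK); once the handshake completes and the
    security policy approves the connection, install a packet-filtering
    rule so the rest of the flow takes the fast path."""

    def __init__(self, approve):
        self.pending = {}     # flow -> last handshake stage seen
        self.rules = set()    # flows with an installed fast-path rule
        self.approve = approve

    def packet(self, flow, flags):
        """Process one packet; return True if it is forwarded."""
        if flags == "SYN":
            self.pending[flow] = "SYN"
        elif flags == "SYN-ACK" and self.pending.get(flow) == "SYN":
            self.pending[flow] = "SYN-ACK"
        elif flags == "ACK" and self.pending.get(flow) == "SYN-ACK":
            del self.pending[flow]
            if self.approve(flow):
                self.rules.add(flow)   # dynamic rule created on the fly
        return flow in self.rules
```

As the text notes, nothing here helps with UDP: there is no handshake to observe, so no point at which a rule can safely be synthesized.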

Although current stateful firewall technologies and ALGs provide for tracking the state of a connection, most provide only limited analysis of the application data. Several firewall vendors, including Check Point, Cisco, Symantec, NetScreen, and NAI, have integrated additional application-level data analysis into the firewall. Check Point, for example, initially added application proxies for Telnet, FTP, and HTTP to the FW-1 product, but has since replaced the Telnet proxy with an SMTP proxy. Cisco's PIX fix-up protocol initially provided for limited application parsing of FTP, HTTP, H.323, RSH, SMTP, and SQLNET. Both vendors have since added support for additional applications. To sum up, the advantages of ALGs are that they do not allow any direct connections between internal and external hosts; they often support user- and group-level authentication; and they are able to analyze specific application commands inside the payload portion of data packets. Their drawbacks are that ALGs tend to be slower than packet filtering firewalls, they are not transparent to users, and each application requires its own dedicated ALG policy/processing module.


URL: https://www.sciencedirect.com/science/article/pii/B9781597491693500098

Information Security

Jeremy Faircloth, in Enterprise Applications Administration, 2014

Application Layer Gateways/Web Application Firewalls

The second firewall technology we'll look at was originally called application filtering or an application layer gateway, and later became known as the next-generation firewall (NGFW). Over time, this technology evolved toward a more web-centric concept and morphed into web application firewalls. This technology is much more advanced than packet filtering because it examines the entire packet and determines what should be done with it based on specific rules that have been defined. For example, with an application layer gateway, if an HTTP packet is sent through the standard FTP port, the firewall can detect this and block the packet if a rule is defined disallowing HTTP traffic. It should be noted that this technology is also used by proxy servers to provide application layer filtering to clients going through the proxy.

With web application firewalls, even more protection is offered as the firewall is able to scan inside each packet to see if there is content that matches specific attack signatures. This is a blend of technologies in that it combines concepts associated with intrusion detection along with concepts associated with firewalls. This blend provides a substantial amount of flexibility that can help support the “defense in depth” practice very well.

One of the major benefits of application layer gateway technology is its application layer awareness. Since it can extract much more information from a packet than a simple packet filter can, it can use more complex rules to determine the validity of any given packet. These rules take advantage of the fact that application layer gateways can determine whether the data in a packet matches what is expected for data going to a specific port. For example, the application layer gateway (in the form of a web application firewall) would be able to tell if packets containing controls for a Trojan horse were being sent to the HTTP port (80) and block them. Based on this, it provides much better security than a packet filter.
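A toy version of that port/payload check, with a deliberately tiny signature list (real web application firewalls ship far richer rule sets, and the signatures below are illustrative):

```python
# Hypothetical attack signatures and the HTTP request methods we expect
# to see at the start of traffic arriving on port 80.
SIGNATURES = [b"cmd.exe", b"/bin/sh", b"xp_cmdshell"]
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

def inspect_http(payload):
    """Drop port-80 traffic that either is not HTTP at all or carries a
    payload matching a known attack signature."""
    if not payload.startswith(HTTP_METHODS):
        return "drop: not HTTP"
    if any(sig in payload for sig in SIGNATURES):
        return "drop: signature match"
    return "allow"
```

This is the blend the text describes: the first check is protocol validation (application layer gateway), the second is signature matching (intrusion-detection territory).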

In addition to what application layer gateways can do, some NGFWs also have the ability to perform user-based policies by mapping users to local IP addresses and integrating LDAP lookups to define roles. An example of this would be allowing the human resources department to have access to run an HR-specific web-based application and have the ability to visit Facebook and LinkedIn, while other users would not be allowed to.

While the technology behind application layer gateways is much more advanced than packet filtering technology, it does come with drawbacks. Because every packet is disassembled completely and then checked against a complex set of rules, application layer gateways are much slower than packet filters. Since application layer gateways actually process the packet at the application layer of the OSI model, the gateway must deconstruct every packet, rebuild it from the top down, and send it back out. This can take quite some time when a lot of traffic is being processed.


URL: https://www.sciencedirect.com/science/article/pii/B9780124077737000053


Jeremy Faircloth, in Enterprise Applications Administration, 2014


A firewall is the most common device used to protect an internal network from outside intruders. When properly configured, a firewall blocks access to an internal network from the outside (ingress filtering) and blocks users of the internal network from accessing potentially dangerous external networks or ports (egress filtering).

There are three primary firewall technologies to be aware of as an enterprise applications administrator:

Packet filtering

Application layer gateways

Stateful inspection

A packet filtering firewall works at the network layer of the OSI model and is designed to operate rapidly by either allowing or denying packets. An application layer gateway operates at the application layer of the OSI model, analyzing each packet and verifying that it contains the correct type of data for the specific application it is attempting to communicate with. A stateful inspection firewall checks each packet to verify that it is an expected response to a current communications session. This type of firewall operates at the network layer, but is aware of the transport, session, presentation, and application layers and derives its state table based on these layers of the OSI model. Another term for this type of firewall is a “deep packet inspection” firewall indicating its use of all layers within the packet including examination of the data itself.
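The stateful inspection behavior can be reduced to a session-table sketch; a real state table also tracks TCP flags, timeouts, and sequence numbers, all omitted here:

```python
class StatefulFirewall:
    """Minimal sketch of stateful inspection: outbound packets create
    session entries, and inbound packets are allowed only when they
    match the reverse of an existing session."""

    def __init__(self):
        self.sessions = set()

    def outbound(self, src, sport, dst, dport):
        # Record the session so the reply can be recognized later.
        self.sessions.add((src, sport, dst, dport))

    def inbound_allowed(self, src, sport, dst, dport):
        # An inbound packet must be a reply to a tracked session.
        return (dst, dport, src, sport) in self.sessions
```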

To better understand the function of these different types of firewalls, we must first understand what exactly the firewall is doing. The highest level of security requires that firewalls be able to access, analyze, and utilize communication information, communication-derived state, application-derived state, and be able to perform information manipulation. Each of these terms is defined below:

Communication information—Information from all layers in the packet

Communication-derived state—The state as derived from previous communications

Application-derived state—The state as derived from applications

Information manipulation—The ability to perform logical or arithmetic functions on data in any part of the packet

Different firewall technologies support these requirements in different ways. Again, keep in mind that some circumstances may not require all of these, but only a subset. In that case, the administrator will frequently go with a firewall technology that fits the situation rather than one that is simply the newest technology.

Firewall Rules

The defined instructions that are used by the firewall to determine what to do with specific traffic are called firewall rules. These rules in a basic firewall (packet filtering) identify the packet source IP and port, destination IP and port, and the definition of what to do with the traffic. Should it be allowed to pass? Denied? Or, if using a firewall as part of the intrusion detection system, should an alert be sent indicating that there is a potential intrusion?
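A basic packet filtering rule set of this kind might be modeled as a first-match list; the addresses and the wildcard convention are illustrative:

```python
# Rules are (src_ip, src_port, dst_ip, dst_port, action); "*" matches
# anything. The final rule implements default deny.
RULES = [
    ("*", "*", "10.0.0.80", 80, "allow"),   # web server only
    ("*", "*", "*", "*", "deny"),
]

def filter_packet(src_ip, src_port, dst_ip, dst_port):
    """First-match evaluation over the rule list."""
    pkt = (src_ip, src_port, dst_ip, dst_port)
    for *pattern, action in RULES:
        if all(p == "*" or p == v for p, v in zip(pattern, pkt)):
            return action
    return "deny"
```

An "alert" action for intrusion detection, as mentioned above, would simply be a third value alongside "allow" and "deny".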

Rules for application layer gateways or stateful inspection are more complex and add more criteria that can be used for identifying the type of traffic or what its intent is. For example, rules can be put in place to capture attempts at directory traversal (strings like “../../../../” in the URL) and drop those packets so that they never even make it to the web server.
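A rule like the directory-traversal example could be sketched as follows; decoding percent-encoding first also catches encoded variants such as `..%2f`:

```python
from urllib.parse import unquote

def is_traversal_attempt(url_path):
    """Flag directory-traversal strings such as '../../../../' in a
    URL path, decoding percent-encoding (twice, to catch double
    encoding) before checking."""
    decoded = unquote(unquote(url_path))
    return "../" in decoded or "..\\" in decoded
```

A gateway applying this rule would drop the packet before it ever reaches the web server, exactly as described above.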


URL: https://www.sciencedirect.com/science/article/pii/B9780124077737000028

The Hardware Infrastructure

Thomas Porter, Michael Gough, in How to Cheat at VoIP Security, 2007

Firewalls and Application-Layer Gateways

Within a firewall, special code for handling specific protocols (like ftp, which uses separate control and data paths just like VoIP) provides the logic required for the IP address filtering and translation that must take place for the protocol to pass safely through the firewall. One name for this is the Application Layer Gateway (ALG). Each protocol that passes embedded IP addresses or that operates with separate data (or media) and control streams will require ALG code to successfully pass through a deep-packet-inspection and filtering device. Due to the constantly changing nature of VoIP protocols, ALGs provided by firewall vendors are constantly playing a game of catch-up. And tests of real-time performance under load for ALG solutions may reveal that QoS standards cannot be met with a given ALG solution. This can cause VoIP systems to fail under load across the perimeter and has forced consideration of VoIP application proxies as an alternative.
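As one concrete case of an embedded address, SDP (used in SIP-based VoIP signaling) carries the media address in a `c=IN IP4 <addr>` connection line, which an ALG must rewrite during translation; the addresses below are illustrative:

```python
def rewrite_sdp_connection(sdp, inside_ip, outside_ip):
    """A VoIP ALG must find addresses embedded in the payload, such as
    the SDP connection line 'c=IN IP4 <addr>', and rewrite them to the
    translated address so media flows to a reachable endpoint."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("c=IN IP4 ") and inside_ip in line:
            line = line.replace(inside_ip, outside_ip)
        out.append(line)
    return "\n".join(out)
```

Every protocol revision that moves or adds such embedded fields forces a matching ALG update, which is the catch-up game described above.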


URL: https://www.sciencedirect.com/science/article/pii/B9781597491693500037

Network Address Translation/Port Address Translation

Eric Knipp, ... Edgar Danielyan, Technical Editor, in Managing Cisco Network Security (Second Edition), 2002

Application Level Gateways

Not all applications are easily translated by NAT devices. This is especially true of those that include IP addresses and TCP/UDP ports in the data portion of the packet. Simple NAT may not always work with certain protocols. This is why most modern implementations of NAT include Application Layer Gateway functionality built in. Application Level Gateways (ALGs) are application-specific translation agents that allow an application on a host in one address realm to connect to another host running a translation agent in a different realm transparently. An ALG may interact with NAT to set up state, use NAT state information, alter application-specific data, and perform whatever else is necessary to get the application to run across different realms.

For example, recall that NAT and PAT can alter the IP header source and destination addresses, as well as the source and destination port in the TCP/UDP header. RealAudio clients on the “inside” network access TCP port 7070 to initiate a conversation with a RealAudio server located on an “outside” network and to exchange control messages during playback such as pausing or stopping the audio stream. Audio session parameters are embedded in the TCP control session as a byte stream. The actual audio traffic is carried in the opposite direction (originating from the RealAudio server, and destined for the RealAudio client on the “inside” network) on ports ranging from 6970 to 7170.

As a result, RealAudio will not work with a traditional NAT device. One workaround is for an ALG to examine the TCP traffic to determine the audio session parameters and selectively enable inbound UDP sessions for the ports agreed upon in the TCP control session. Another workaround could have the ALG simply redirect all inbound UDP sessions directed to ports 6970 through 7170 to the client address on the “inside” network.
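The first workaround can be sketched as a small gate table keyed on the ports learned from the control session; the class and method names are invented for illustration:

```python
class RealAudioALG:
    """Sketch of the selective workaround: watch the TCP control
    session (port 7070), learn the negotiated UDP ports, and open
    inbound gates only for those ports, sanity-checked against
    RealAudio's documented 6970-7170 range."""

    def __init__(self):
        self.gates = set()

    def saw_control_negotiation(self, client_ip, udp_port):
        if 6970 <= udp_port <= 7170:      # ignore out-of-range requests
            self.gates.add((client_ip, udp_port))

    def inbound_udp_allowed(self, dst_ip, dst_port):
        return (dst_ip, dst_port) in self.gates
```

Compared with blindly redirecting the whole 6970-7170 range, this opens only the ports actually negotiated, which is the smaller attack surface.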

ALGs are similar to proxies in that both ALGs and proxies aid application-specific communication between clients and servers. Proxies use a special protocol to communicate with proxy clients and relay client data to servers and vice versa. Unlike proxies, ALGs do not use a special protocol to communicate with application clients, and do not require changes to application clients.


URL: https://www.sciencedirect.com/science/article/pii/B978193183656250009X

Microsoft Vista: Networking Essentials

In Microsoft Vista for IT Security Professionals, 2007

Limited Address Space

IPv4 has a much more limited address space than the IPv6 standard. IPv4 has a 32-bit address space, which most of us are familiar with seeing in dotted decimal notation, written like this: 192.168.2.168. What this actually represents is a binary string that is 32 digits long (and thus a 32-bit string), where each of those four numbers is translated into binary. So, an IP address that is expressed using dotted decimal notation as 192.168.2.168 would be represented in binary as follows: 11000000101010000000001010101000.
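The octet-by-octet conversion can be expressed directly:

```python
def to_binary_string(dotted):
    """Render a dotted-decimal IPv4 address as its 32-bit binary form,
    eight bits per octet."""
    return "".join(format(int(octet), "08b") for octet in dotted.split("."))

# 192 -> 11000000, 168 -> 10101000, 2 -> 00000010, 168 -> 10101000
```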

By using a 32-bit address space, IPv4 has a theoretical upper bound of 4,294,967,296 possible IP addresses. However, the actual number of available IPv4 addresses is limited by some implementation decisions that were made in the early days of IPv4’s use as a protocol. The most striking example of this is the loopback address range, where all IPv4 addresses in the 127.x.x.x space have been reserved for troubleshooting and testing purposes. By making the familiar 127.0.0.1 address available as a troubleshooting tool, the originators of the IPv4 standard eliminated 16,777,216 IP addresses in one fell swoop.

The number of available IPv4 addresses has also been limited by some early decisions in doling out IPv4 IP addresses to various organizations that were early adopters of the Internet. For some historical perspective on this, remember the following: Before the mid-’90s, the Internet was primarily an educational tool that was funded by the National Science Foundation and populated only by large research universities and corporations such as MIT and Bell Labs. Not foreseeing the commercial explosion that would take place with the rise of e-commerce many years later, IP addresses were doled out quite generously to early Internet residents because there seemed no danger of ever running out: A single research university in the United States might have more IP addresses assigned to it than an entire European nation, for example. For both of these reasons, the actual number of available IPv4 addresses is closer to a few million rather than upward of 4 billion.

IPv4 has thus far staved off being relegated to legacy status by the active use of a network address translator, or NAT. This has allowed latecomers to Internet society to conserve the available reserves of IP addresses by relying on private IP addresses for the majority of their networking needs. RFC 1918 defines the following private IP address ranges that have been reserved from the IPv4 space for use by private organizations:

10.0.0.0 through 10.255.255.255

172.16.0.0 through 172.31.255.255

192.168.0.0 through 192.168.255.255

This RFC defined private IP addresses to meet both of the following needs (as noted at www.faqs.org/rfcs/rfc1918.html):

“Hosts that do not require access to hosts in other enterprises or the Internet at large; hosts within this category may use IP addresses that are unambiguous within an enterprise, but may be ambiguous between enterprises.”

“Hosts that need access to a limited set of outside services (e.g., e-mail, FTP, netnews, remote login) which can be handled by mediating gateways (e.g., application layer gateways). For many hosts in this category an unrestricted external access (provided via IP connectivity) may be unnecessary and even undesirable for privacy/security reasons. Just like hosts within the first category, such hosts may use IP addresses that are unambiguous within an enterprise, but may be ambiguous between enterprises.”

In other words, private IP addresses are intended for use by both personal and business computers that don’t necessarily need to possess a public IP address all their own. These will typically be desktop computers that are being used to access Internet resources such as e-mail, the World Wide Web, and so on, using a NAT device that can provide Internet connectivity under these conditions without expending a public IP address for each connected computer.
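Python's standard `ipaddress` module can check an address against the three RFC 1918 blocks directly:

```python
import ipaddress

# The three RFC 1918 blocks in CIDR form.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    """Return True if addr falls in one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)
```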

NAT to the Rescue?

Devices configured with private IP addresses can communicate on the Internet by using a NAT device such as a router or proxy server. A NAT device will take packets sent from a device configured with a private IP and then, as the name suggests, translate those packets using a public IP address assigned to the NAT device. For example, say that you have a workstation on a corporate network that is configured with the private IP address of 192.168.1.100. This workstation wants to communicate with a Web server on the Internet that is configured with an IP address of 198.51.100.25. When the private computer transmits packets to the destination Web server, the source and destination addresses read as follows:

Destination address: 198.51.100.25

Source address: 192.168.1.100

The private computer does not transmit these packets directly to the destination Web server, however; the traffic is instead transmitted to a NAT device configured with a public IP address of 203.0.113.50. The NAT device then removes the private source address, replaces it with its public IP address, and retransmits the packets to the destination host as follows (the NAT device maintains a translation table to keep track of inbound and outbound traffic that it is translating for multiple hosts on the private network):

Destination address: 198.51.100.25

Source address: 203.0.113.50

When the destination Web server transmits information back to the workstation, the destination address is not the private IP address, but rather the public address of the NAT device. (The destination Web server typically does not even realize that a NAT device is involved in the conversation.) So, the return headers will look like this:

Destination address: 203.0.113.50

Source address: 198.51.100.25

Based on the information that it recorded in its translation table, the NAT device will receive this information and then translate the destination address to that of the computer on the private network:

Destination address: 192.168.1.100

Source address: 198.51.100.25
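The translation table at the heart of this exchange can be sketched as follows; this sketch uses the port-based variant (PAT, covered earlier), and the addresses are illustrative values from the RFC 5737 documentation ranges:

```python
class NatDevice:
    """Sketch of source NAT with a translation table. Outbound packets
    get their private source rewritten to the device's public IP; the
    table maps return traffic back to the private host."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}          # public_port -> (private_ip, private_port)
        self.next_port = 49152   # start of the dynamic/ephemeral range

    def outbound(self, src_ip, src_port):
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (src_ip, src_port)
        return self.public_ip, public_port   # rewritten source

    def inbound(self, dst_port):
        # Translate return traffic back to the private host, or None
        # if no session exists (the packet is dropped).
        return self.table.get(dst_port)
```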

Although the use of NAT has been a boon for conserving available addresses within the IPv4 32-bit space, it is not the panacea that it might seem at first glance. The use of NAT for network applications can create performance issues as packets are bottlenecked into a single NAT device on one or both ends of the communication, and certain applications and protocols do not function reliably (or sometimes at all) if they are required to transmit through a NAT device. For example, many devices are unable to route IPSec-secured traffic (such as that used for a virtual private network [VPN]) through a NAT device because the headers are encrypted, thus rendering NAT unable to modify the source and destination addresses without breaking the encryption process. This has largely been addressed by the use of NAT-Traversal (NAT-T), but this is unfortunately not a universal solution because not all NAT devices conform to a single standard. This “NAT Traversal” problem has also reared its head in the case of peer-to-peer applications used for file sharing and Voice over IP that cannot always communicate through NAT devices.

Notes from the Underground…

Calling It “Private” Doesn’t Necessarily Make It So…

These IP address ranges are considered private because they are not typically passed along beyond the boundary of a router; however, that doesn’t mean that they can’t be. The key point to remember here is that an RFC is a standard, not a technical requirement; an attacker can choose to not “play by the rules” of the RFC if that works to his advantage in attacking a network.

Although most routers and other Internet-connected devices will not pass traffic to or from the RFC 1918 address spaces by default, these devices can often be manually configured to allow an attacker to do so. Either through maliciousness or through someone misconfiguring an Internet-connected device, you can often find routes to so-called private address spaces being advertised to and from routers attached to the Internet. And although this provides an attack vector that a malicious user can exploit, it’s an attack vector that can be fairly easily mitigated. Particularly if you are protecting Internet-facing machines that are configured with public IP addresses, your border router can quite easily assist you in some of the “heavy lifting” associated with Internet security if you configure it with the following set of (fairly simple) rules:

Drop any inbound traffic that is not destined for a host on the internal network For example, if your router sits in front of computers within a specific range of public IP addresses, why would you accept any traffic addressed to a host outside that range? This seems like common sense when it’s spelled out, doesn’t it? But many routers can be configured with an inbound routing rule of “Accept all inbound traffic,” rather than “Accept inbound traffic destined for IP addresses within my network.”

Drop any outbound traffic that does not originate from a host on the internal network This is the preceding rule in reverse: If my router is in front of computers within a specific range of IP addresses, why would I want to transmit any outbound traffic that originated from a computer with an IP address outside that range?

Drop any traffic (inbound or outbound) destined for the RFC 1918 ranges As we’ve been discussing, these IP ranges should not be routed across the Internet under any circumstances; any traffic that your router receives that is destined for these IP addresses was sent through either maliciousness or misconfiguration. Please note here that we’re not referring to dropping NAT traffic if your network is configured for it: We’re not talking about traffic that is destined for a NAT device that will then be retransmitted to an internal computer on your network. We’re referring to traffic that actually has a Destination Address field within one of the three RFC 1918-defined address ranges.
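The three router rules above could be expressed as a single filter function; the internal block here is an illustrative RFC 5737 documentation range standing in for your assigned public addresses:

```python
import ipaddress

INTERNAL = ipaddress.ip_network("203.0.113.0/24")  # illustrative public block
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def border_filter(direction, src, dst):
    """Apply the three border-router rules: drop traffic destined for
    the RFC 1918 ranges, drop inbound traffic not addressed to the
    internal block, and drop outbound traffic not sourced from it."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if any(dst in net for net in RFC1918):
        return "drop"                        # rule 3: never route to private space
    if direction == "in" and dst not in INTERNAL:
        return "drop"                        # rule 1: inbound must target us
    if direction == "out" and src not in INTERNAL:
        return "drop"                        # rule 2: outbound must come from us
    return "accept"
```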


URL: https://www.sciencedirect.com/science/article/pii/B9781597491396500108

Office – Macros and ActiveX

Rob Kraus, ... Naomi J. Alpern, in Seven Deadliest Microsoft Attacks, 2010

Macro and ActiveX Defenses

The bad news is that macro and ActiveX attacks are a popular and effective class of attacks; they will continue to morph and take advantage of new vulnerabilities, and so will remain a risk no matter what you do. The good news is that because these attacks are so popular, there are many ways to defend yourself or your organization against them without having to jump through a lot of hoops.

Deploy Network Edge Strategies

The network edge is both your first and last line of defense against attacks using active content such as macros and ActiveX. To understand this, you need to think about how the malicious content can get into your network and how it can deliver any payload back out of it. In one sense, these attacks are passive in nature because the attacker is not actively attacking a specific target but instead, the attacker is relying on some action taken by an unsuspecting user to activate the attack.

Malicious content must pass through the network edge to get to where it can be activated, so this is where you build the first line of defense that is discussed in the section “Using Antivirus and Antimalware.” In many cases, the mechanism for delivery of Office documents with malicious content is e-mail, and therefore it is possible to use your e-mail server to employ defensive strategies that prevent the content from ever getting into the hands of a user. Besides scanning for viruses, e-mail servers can filter for tip-offs such as mismatched headers or malicious sources based on blacklists. They can also be set to allow only plain text e-mails (which wouldn't affect attachments, but does kill all active content within the e-mails themselves).

From an outbound perspective, edge strategies are employed to ensure that the malicious content that has been executed within your environment can't actually deliver any value to the attacker. These strategies are based on filtering the data as it tries to leave your network and can include implementing egress filtering on firewalls, or deploying an application layer gateway or a data loss prevention (DLP) solution. In each of these cases, the traffic from your internal network is scanned as it attempts to cross the network boundary and is allowed or disallowed (or possibly quarantined) based on the policies/rule set you have defined.

Using Antivirus and Antimalware

You should install antivirus and antimalware software at all layers of your environment to ensure that viruses and malware are detected and neutralized. This includes integration with border devices, with e-mail servers, and on end-user devices. You need protection at all layers because, while you want to eliminate a threat from your network as soon as possible, not all traffic can be scanned at each layer.

For example, let's say your friend knows you enjoy collecting Star Wars action figures and wants to send you a picture he found in an ad for the last one you need for your collection. Since he knows that your company monitors your e-mail, he decides to encrypt the file and name it something generic to circumvent your e-mail filters. Unfortunately, this means the content of the encrypted file won't be scanned until someone opens it, rather than being detected at the network edge. Therefore, it is vital that scanning occurs at whatever point the mail is opened.

In addition to layering protection throughout the network, controls should be configured to ensure that viruses are detected before they can actually run. To accomplish this, antivirus and antimalware software should be set to use heuristics as well as specific virus/malware signatures. The software should always have real-time scanning enabled, and a full scan of the hard drive should be performed at least once a week. Using all of these options is a trade-off because running your antivirus and antimalware software this way takes more processor cycles, but in almost all cases it is worth it.
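The difference between signature-based and heuristic detection can be shown with a toy example. The signature database below contains only a prefix of the standard EICAR test string, and the heuristic rule is invented for illustration; real engines use vast signature databases and far richer behavioral heuristics.

```python
# Toy contrast of signature vs. heuristic detection. The signature list
# and the heuristic rule are illustrative assumptions, not how any real
# antivirus engine is implemented.

SIGNATURES = {
    # Prefix of the standard EICAR antivirus test string.
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR": "EICAR-Test-File",
}

def signature_scan(data: bytes):
    """Return the name of a matching known signature, or None."""
    for sig, name in SIGNATURES.items():
        if sig in data:
            return name
    return None

def heuristic_scan(data: bytes) -> bool:
    """Crude heuristic: a document containing both an auto-run macro
    hook and a shell-execution call is flagged as suspicious, even if
    no known signature matches."""
    return b"AutoOpen" in data and b"Shell(" in data
```

Signatures catch only what is already known; the heuristic can flag a brand-new macro virus, at the cost of extra processor cycles and occasional false positives, which is the trade-off described above.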

Update Frequently

Like Windows, Office applications sometimes have vulnerabilities, and these vulnerabilities are patched through updates. Updates to Office applications should either be downloaded and installed automatically on each individual machine or downloaded and integrated into whatever patching process you have within your environment. Windows Update allows both Windows and Office patches to be downloaded at the same time, and this option is available for all versions of Office newer than Office XP.

Even more important than keeping Office up-to-date is to keep your antivirus and antimalware signatures as current as possible. This software should be set to automatically download and install new signature files as soon as they are released (although establishing an internal site that updates from the manufacturer rather than having each computer download individually is a good strategy for accomplishing this). In their infancy, antivirus signature files did sometimes cause issues with computer systems and therefore testing was needed before deploying these files. However, this occurrence is now so rare that the risk associated with not using the newest signatures far outweighs the risk that a signature file will cause a problem on your systems.

Using Office Security Settings

Regardless of the version or type of Office application you are using, there are security settings that control how the application deals with active content, and you should use these to ensure the security of your computer. In older versions of Office programs, the default settings generally allow all active content to run, which is an issue from a security perspective. Microsoft has changed this philosophy in recent years, so the defaults for the newer versions are much more restrictive (but can be annoying to end users because they tend to ask for permission before running the content).

Epic Fail

Oversecuring an environment inevitably leads to undersecuring. Many companies pick the most restrictive settings possible when implementing security in their Office applications. Unfortunately, this usually prevents people from being able to do their work. When security settings impact the business, leaders rarely have the stomach for taking the time to tweak the security to the right level; instead, they demand the application be allowed to run with the lowest security settings possible. Of course, this opens the business up to all kinds of attacks over the long term, some of whose attack vectors would never have been available if a more reasonable security approach had been taken.

The security settings are separate for each Office application and are accessed through the menus of the particular Office application you are trying to secure. Prior to Office 2007, these settings are generally located under the “Tools” menu and are relatively easy to find. Office 2007 restructured the interface and relocated the security settings into an area named the “Trust Center” (shown in Figure 5.4), but made the settings much harder to get to.

FIGURE 5.4. Microsoft Word Trust Center

To access the Trust Center in Office 2007 applications, open the general menu by clicking the Office symbol in the top left-hand corner of the application. This opens a menu with a small button in the bottom right-hand corner that says “Word Options” (or “Excel Options,” “Access Options,” and so on, depending upon the application). Clicking the Options button brings up the Options menu, where you select Trust Center from the list on the left side of the screen. This displays information in the right-hand pane, but not the Trust Center itself. The last step is to locate and click the Trust Center Settings… button within the right pane, which brings up the menu shown in Figure 5.4.

All of the Office applications offer the same security setting options from a general perspective, but they are not exactly the same. For example, Excel has an additional option for “External Content” that other Office products (such as Word and PowerPoint) do not. Table 5.1 discusses each of the menus within the Trust Center and what it is used for from a general perspective. Additional information about the Trust Center can be obtained from Microsoft's Web site.

Table 5.1. Trust center options

Menu — Use and options description

Trusted Publishers — Contains a list of Certificate Authorities that the Office application should trust for digital signing.

Trusted Locations — Contains a list of paths that the Office application should trust when opening files. By default, this only includes the locations for templates and add-ins from Microsoft. This list affects how Office operates based on other settings within the Trust Center menu, and adding the locations where you keep your documents will weaken the security of your computer.

Add-ins — Options for how the Office application deals with add-ins. These generally include options for disabling all application add-ins, requiring digital signatures by a trusted publisher for any add-in, and disabling user notification when Office stops an unsigned add-in from running.

ActiveX Settings — Options for how Office deals with ActiveX controls in documents stored in locations not on the Trusted Locations list. By default, this is set to prompt the user before enabling ActiveX controls with minimal restrictions. Also provides an option for always running controls in “safe mode.”

Macro Settings — Options for how Office deals with macros in documents stored in locations not on the Trusted Locations list. By default, this is set to disable all macros with notification. Also provides an option to trust access to the VBA project object model.

Message Bar — Options for whether the Message Bar shows within Office.

External Content (Excel only) — Options for securing data connections and links within an Excel workbook.

Privacy Options — Options related to Office Online, including checking whether Office documents are from, or link to, suspicious Web sites as determined by Microsoft. Also provides an option for bringing up the Document Inspector, which searches for hidden content within a document.

Office 2007's defaults attempt to strike a balance between security and usability, and if you are in a domain environment, you can manage all of the Trust Center settings through Group Policy. For earlier versions of Office, go through the security options within the Tools menu and determine which settings are necessary within your environment.

Working Smart

In one of the earlier tips in the chapter, we discussed the importance of training end users to work smart with regard to the security of their computers. Working smart includes understanding the basic security processes everyone should use when dealing with their computer. An obvious example would be deleting the spam e-mail promising you “more powerful orgasms” before opening the virus.exe attachment that came with it. Almost everyone who sees an e-mail like this would immediately delete it; however, merely scrolling past an e-mail in Outlook with malicious code embedded may execute the code even if you don't intend to open it.

Rule #1 for working smart is to think before you click on something. We generally think of this in relation to visiting a Web site, but applying the same thought process can be beneficial when working with Office because of the amount of active content currently being used in these applications. A large percentage of the e-mails, documents, and spreadsheets people share with each other include some embedded links or buttons which may redirect you to a Web site or run some macro. Take a second and ask yourself whether you have ever opened the document before, then run a virus scan against any documents before you open them for the first time (most virus scanners place a “scan” option in the menu that appears when you right-click on a file).

Also, consider whether you trust the source where you got the document. Did you download it from a legitimate Web site like Microsoft.com or was it something you found as you were searching for a free MP3 of the newest “Weird Al” song? Did you ask your boss to post a document you needed on your group's SharePoint site or did someone just randomly e-mail it to you with a sort of suspicious subject line? Always think twice before making a decision to click on something that may cause security issues.

If you take a second to think about where the document came from, and whether you actually trust that source, then you can take action before opening the document. If it came to you out of the blue from someone, confirm that they sent it by calling or sending them an e-mail (make sure it is a new e-mail, because opening the questionable e-mail to reply “Did you send this to me?” defeats the purpose). When in doubt, always check with your network administrators or security staff before doing anything you suspect may reduce the security of your network.

Finally, it is incredibly important to take a second to consider whether to allow something to happen on your computer when Office or Windows pops up a box asking you whether you want something to run. This is the last line of defense and working smart means you consider whether you are actually asking for something to happen before that permission box appears or if something is happening in the background without your knowledge.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495516000054

Publishing Exchange 2007

Fergus Strachan, in Integrating ISA Server 2006 with Microsoft Exchange 2007, 2008

The Benefits of ISA Server 2006

ISA Server 2006 is an integrated security gateway that helps protect company networks from external threats while providing authorized users with access to internal resources.

Defend against Internet threats ISA Server helps protect the company network with a hybrid proxy-firewall architecture, packet inspection and verification, granular policies, and monitoring and alerting capabilities. Standard firewall rule sets combine with packet inspection to control which ports traffic can enter on, what kind of traffic it is, and whether it must be authenticated before reaching the internal network.

Connect and secure branch offices ISA Server can be used to integrate company branch offices by combining site-to-site VPNs, content caching, and HTTP compression with its application layer filtering capability.

Securely publish internal resources ISA Server is an intelligent, application-layer gateway that can securely publish information such as Web applications, Exchange Server, and any other internal resource users need access to from the Internet or from other company sites. Using pre-authentication and packet inspection, it prevents unauthorized data from entering the network and drastically reduces the risk of intrusion.

ISA Server 2006 is one of the most secure firewalls out there. It has been approved for certification at Common Criteria Evaluation Assurance Level 4+, the highest level mutually recognized by all participating countries. Since ISA Server 2004 came out, there have been no security bulletins issued for ISA Server 2004 or ISA Server 2006, and instances of ISA servers being compromised in the wild are extremely rare, if they exist at all.

Web Publishing Rules

The main feature of ISA Server 2006, certainly from the point of view of Exchange Server, is its capability to securely publish Web sites and other servers to the Internet. When used as a reverse-proxy server in this way, it has a whole host of tools to bring to bear when it comes to providing access to internal services while preventing unauthorized access and attacks from the Internet.

ISA Server Web publishing rules are, among other things, used for publishing Exchange services such as Outlook Anywhere, Outlook Web Access, and Exchange ActiveSync. Some of the features of ISA Server Web publishing rules relevant to publishing Exchange are:

Reverse-proxy access to sites When publishing, or reverse-proxying, sites on internal servers, ISA Server's Web proxy filter deconstructs and reconstructs client-to-server communication. In contrast to many hardware-based firewalls, this allows ISA to inspect the traffic, even SSL-encrypted traffic, to check the validity and harmfulness of the packets. It also allows for reverse Web caching and other features.

Pre-authentication of users ISA Server's pre-authentication feature allows you to ensure that any traffic reaching your internal servers has been authenticated and authorized by your access policy. This helps stop attacks on your servers based on unauthenticated connections using known weaknesses in server applications such as IIS, the likes of which are being patched on a monthly basis. When publishing Exchange Web protocols, OWA, Outlook Anywhere, and ActiveSync connections can be authorized at the ISA server before being let in to the Exchange Client Access servers. Using delegation of user credentials, the ISA server can then authenticate with the Exchange server on the user's behalf, preventing the need for the user to log in twice. If the delegation fails (the Exchange server rejects the connection), the client connection is dropped at the ISA server.

RADIUS and LDAP authentication methods In situations where you don't want to make the ISA server a member of the domain—for example, when it is the Internet-facing firewall in a back-to-back arrangement—you can still authenticate users against the Active Directory database by using RADIUS or LDAP. RADIUS is a standard authentication protocol across the industry and is the default method for many devices and operating systems, and so may be the best option where a RADIUS infrastructure and policy is already in place. However, LDAP authentication, which is new in ISA Server 2006, gives you benefits beyond RADIUS:

There is no additional component to install (such as IAS on a Windows server)—it works against a DC out of the box since AD is an LDAP database.

LDAP can leverage AD groups for authorization, unlike RADIUS.

ISA Server 2006 has also added a second two-factor authentication protocol—RADIUS One Time Password (OTP).

Delegation of user credentials Pre-authenticating users is a great feature of ISA Server, but if the published server also requires authentication it's not great if the user is faced with a prompt for credentials twice. ISA Server can answer the published server's call for credentials by forwarding the credentials obtained through pre-authentication to the server in question. There are a number of methods by which ISA can send these credentials, and we discuss some of them in this chapter (Figure 4.2).

Figure 4.2. Delegation Can Happen Using a Number of Methods

Single Sign On ISA Server allows you to specify that once a client has authenticated with ISA to access a service within the network, it can access further services within the same network without having to re-enter credentials. For example, if a user accesses Outlook Web Access at https://mail.lochwallace.com/owa and then opens a page on the published SharePoint site at https://sharepoint.lochwallace.com/, ISA Server can seamlessly use the user's already-provided credentials to authenticate him with the SharePoint server, and the user isn't asked for his username and password. For this to work, single sign on must be configured on the ISA server for *.lochwallace.com, and both sites must be published using the same listener.

Application-layer inspection ISA Server's HTTP Security Filter is used to inspect traffic crossing its border and apply rules to the HTTP traffic. Almost every aspect of HTTP communication can be controlled using this filter, from maximum payload length to HTTP methods/verbs, file extensions, request and response headers, and executable content (Figure 4.3). Using SSL bridging, ISA decrypts, inspects, and then re-encrypts traffic destined for internal servers. Because of this bridging, it is also possible to redirect SSL traffic to an FQDN not specified in the original traffic. This HTTP filtering is done on a per-rule basis.
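The kind of per-request checks such a filter applies can be sketched as follows. The limits, method list, and blocked extensions are illustrative assumptions, not ISA Server's actual defaults.

```python
# Sketch of application-layer HTTP filtering: verify method, file
# extension, and payload length before a request is forwarded. The
# specific limits and lists are assumptions for illustration only.

MAX_PAYLOAD = 10 * 1024 * 1024            # 10 MB request body cap
ALLOWED_METHODS = {"GET", "POST", "HEAD"}
BLOCKED_EXTENSIONS = {".exe", ".bat", ".cmd"}

def http_request_allowed(method: str, path: str, body_length: int) -> bool:
    """Return True if the request passes every configured check."""
    if method.upper() not in ALLOWED_METHODS:
        return False                       # unexpected verb (e.g. TRACE)
    if any(path.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False                       # blocked file extension
    if body_length > MAX_PAYLOAD:
        return False                       # oversized payload
    return True
```

Because the firewall reassembles the full HTTP request (including after SSL bridging), it can enforce these checks even on encrypted traffic, which a packet filter working below the application layer cannot do.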

Figure 4.3. Many Filter Settings Can Be Applied to a Rule

Path redirection ISA Server allows you to redirect connections to different paths and even different internal servers based on the path in the URL. For example, requests to https://mail.lochwallace.com/owa and https://mail.lochwallace.com/Autodiscover come into the same IP address on the ISA server, but can be redirected to different internal Exchange servers. In addition, a Web publishing rule can redirect traffic to a different URL directly, as in the Autodiscover example later in the chapter where requests to https://autodiscover.lochwallace.com/ are redirected to https://mail.lochwallace.com/.
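Path redirection boils down to a routing table keyed on URL path prefixes. The internal server names below are hypothetical; the paths follow the chapter's Exchange examples.

```python
# Minimal sketch of path-based redirection: the longest matching URL
# path prefix decides which internal server receives the request.
# The internal host names in this mapping are illustrative assumptions.

PATH_MAP = {
    "/owa": "exchange-cas1.internal",
    "/Autodiscover": "exchange-cas2.internal",
}

def route_by_path(path: str, default: str = "webserver.internal") -> str:
    """Return the internal server for a given request path."""
    # Check longer prefixes first so more specific rules win.
    for prefix, server in sorted(PATH_MAP.items(),
                                 key=lambda kv: len(kv[0]), reverse=True):
        if path.startswith(prefix):
            return server
    return default
```

Both requests arrive at the same public IP address, but /owa and /Autodiscover can end up on entirely different internal Exchange servers, exactly as described above.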

Port and protocol redirection Using port redirection, ISA Server can listen to incoming requests on port 80, for example, and forward the traffic to a different port on the internal Web server. This is useful if the internal server is published on a nonstandard port but you want to publish it to Internet users using standard port 80. It is also the method by which you terminate an incoming SSL connection at the ISA server and forward the traffic unencrypted (Figure 4.4).

Figure 4.4. Port and Protocol Translation Is Set on the Bridging Tab

HTTP to FTP protocol redirection is supported by the Web publishing rules. This enables FTP sites to be published using Web publishing rules, as it transforms an HTTP GET command to an FTP GET command when it flows across the firewall.

Rule scheduling ISA Server allows you to put schedules on when users are allowed to access resources published through the publishing rules. This is useful if you want workers to access certain sites only during working hours, or if there are high-bandwidth applications you want people to access only during network “trough” times (Figure 4.5).

Figure 4.5. You Can Set Exactly when to Allow Access to Resources

Multiple Web site publishing through a single IP address ISA Server can publish multiple Web sites on a single IP address by inspecting the host header in the packet and applying the rule that corresponds to the Web site requested. For example, to publish webserver.lochwallace.com and mail.lochwallace.com on the same IP address, you can simply create two publishing rules and apply the same Web listener to both. Each rule specifies the Public Name of the server it is publishing, and as long as that name is in the request's host header, ISA Server will pick it up and send it to the correct internal server (Figure 4.6).
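The host-header dispatch described above can be sketched as a simple lookup. The internal addresses in the mapping are hypothetical; the public names are the chapter's example domains.

```python
# Sketch of publishing multiple sites on one IP address: the HTTP Host
# header, not the destination IP, selects the publishing rule. The
# internal server addresses below are illustrative assumptions.

HOST_RULES = {
    "webserver.lochwallace.com": "10.0.0.10",
    "mail.lochwallace.com": "10.0.0.20",
}

def select_backend(host_header: str):
    """Return the internal server for a request, or None to reject it."""
    # Strip any :port suffix and normalize case before matching.
    return HOST_RULES.get(host_header.split(":")[0].lower())
```

A request whose host header matches no rule's Public Name gets no backend and is rejected, so only explicitly published names are reachable through the shared IP address.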

Figure 4.6. Publishing Multiple Web Sites

Public DNS must resolve the FQDNs of the published servers to the external IP address of the ISA server. It is easy to publish any number of servers this way using the same IP address.

Link translation Internal servers often publish links to other URLs using the NetBIOS name of the server, since they are geared toward internal communication. External users accessing this information would receive broken links, since an FQDN is needed to traverse the Internet through the ISA server to the published server. Link translation solves this by rewriting such internal links into externally resolvable URLs as pages pass through the ISA server (Figure 4.7).

Figure 4.7. Global Link Direction Mappings

Link mappings can be set globally or on a per-rule basis.
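In essence, a link-translation mapping is a set of string rewrites applied to response bodies. The mapping entries below are illustrative; the external names reuse the chapter's example domain.

```python
# Sketch of link translation: rewrite internal (NetBIOS-style) links in
# an HTML response into externally resolvable FQDNs before the page
# leaves the firewall. The mapping entries are illustrative assumptions.

LINK_MAP = {
    "http://sharepoint/": "https://sharepoint.lochwallace.com/",
    "http://mail/": "https://mail.lochwallace.com/",
}

def translate_links(html: str) -> str:
    """Replace each internal link prefix with its external equivalent."""
    for internal, external in LINK_MAP.items():
        html = html.replace(internal, external)
    return html
```

A real implementation would parse the HTML and handle redirects and headers as well, but this conveys why external users stop seeing broken NetBIOS-name links once translation is in place.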

Typical ISA Server Configurations

There are a number of particular situations and configurations in which ISA Server 2006 can be used in corporate networks. Because of its comprehensive feature set, ISA can function as a simple firewall, a publishing and filtering proxy server, a branch office VPN device, and many others. In terms of publishing Exchange Server 2007 we are interested mainly in its firewall and proxying functionality and therefore in the Internet-facing configurations set out here.

Many small companies have a single gateway to the Internet comprising one security device between the company network and the Internet. Often this is a Small Business Server, where the ISA software runs on the main server itself. Preferably, from a security point of view, the ISA server should be a separate box, and larger companies may configure it this way. Large corporations, on the other hand, will undoubtedly have at least one perimeter network where servers published to the Internet sit. In these situations, they may deploy ISA Server as a back-end firewall between the corporate network and a DMZ, and as a member of the domain to provide increased security and flexibility.

URL: https://www.sciencedirect.com/science/article/pii/B9781597492751000047

What is an application-proxy firewall?

An application-proxy firewall is a server program that understands the type of information being transmitted—for example, HTTP or FTP. It functions at a higher level in the protocol stack than do packet-filtering firewalls, thus providing more opportunities for the monitoring and control of accessibility.

What type of firewall is also known as a proxy server?

A proxy firewall is also called an application firewall or gateway firewall. A proxy firewall is also a proxy server, but not all proxy servers are proxy firewalls. A proxy server acts as an intermediary between clients and servers.

What are the 3 types of firewalls?

Based on their method of operation, there are four different types of firewalls:

Packet-filtering firewalls. Packet-filtering firewalls are the oldest, most basic type of firewall.

Circuit-level gateways.

Stateful inspection firewalls.

Application-level gateways (proxy firewalls).

Which type of firewall makes use of a proxy server to connect to remote servers on behalf of clients?

The proxy firewall protects the internal system from outside network invaders and prohibits direct connections between the local network and the Internet. As noted previously, a proxy firewall uses proxy servers to gather relevant information at the application layer.