FreeBSD 8.1/pf performance

Jan Dušátko jan at dusatko.org
Fri Nov 5 09:00:43 CET 2010


Hi,
I am running FreeBSD 8.1 with pf as the packet filter. I have hit a problem
that did not make me happy at all, so I would like to ask what experience you
have with the current packet filters.

Two Intel NICs form a lagg pair, a virtual failover interface:
ifconfig_em0="rxcsum txcsum tso lro up"
ifconfig_em1="rxcsum txcsum tso lro up"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 <IP_ADDRESS> netmask <MASK>"
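One variant I have not tried yet (just a guess on my part): TSO and LRO are
known to interact badly with packet filtering, because the NIC coalesces or
splits segments before pf gets to see the individual packets. The offloads can
be switched off per interface in rc.conf:

ifconfig_em0="rxcsum txcsum -tso -lro up"
ifconfig_em1="rxcsum txcsum -tso -lro up"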

A stateful packet filter hangs off this interface; inbound traffic is blocked
except for ports 22 and 80 (and 443, per the rules below). All outbound
traffic is allowed.

pass in quick on $ext_if proto tcp to ($ext_if) port { 22 } keep state
pass in quick on $ext_if proto tcp to port { 80, 443 } modulate state
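For completeness, the two rules above sit under a default deny; the
surrounding pf.conf looks roughly like this (the macro value and the exact
block policy here are a sketch from memory, not a verbatim copy):

ext_if="lagg0"
set skip on lo0
block in on $ext_if              # default deny inbound
pass out on $ext_if keep state   # everything outbound allowed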

In this configuration the system can handle roughly 2000-3000 clients (about
10000-15000 packets/sec).

--------

If I additionally enable queues and random distribution (the reason being
heavy link utilization by some clients), system performance drops and it can
handle only about 1000-1500 clients (about 3000-5000 packets/sec).

altq on $ext_if cbq bandwidth 1Gb qlimit 65535 queue {q1, q2, q3, q4, ssh}
      queue q1     bandwidth 150Mb
      queue q2     bandwidth 120Mb
      queue q3     bandwidth 100Mb
      queue q4     bandwidth  75Mb
      queue ssh    bandwidth   50% cbq(borrow)

pass in quick on $ext_if proto tcp to ($ext_if) port { 22 } keep state
pass in quick on $ext_if proto tcp to port { 80, 443 } modulate state \
      (source-track, max-src-conn 64, max-src-conn-rate 16/1)

pass out on $ext_if proto tcp from port { 80, 443 } modulate state queue q1 probability 10%
pass out on $ext_if proto tcp from port { 80, 443 } modulate state queue q2 probability 20%
pass out on $ext_if proto tcp from port { 80, 443 } modulate state queue q3 probability 30%
pass out on $ext_if proto tcp from port { 80, 443 } modulate state queue q4 probability 40%
pass out on $ext_if proto tcp from port { 22 }      modulate state queue ssh
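If the slowdown really comes from state-table maintenance, pf's limits and
timeouts can be tuned in pf.conf; this is what I would experiment with (the
numbers are guesses on my part, not measured values):

set limit states 200000
set limit src-nodes 50000
set optimization aggressive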

--------

The funny part comes when I turn pf off. At that point we are talking about
serving roughly 20 000 clients before I hit the application's ceiling, with no
timeout problems at all.
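Since the machine itself clearly has the capacity, the bottleneck may also sit
in interrupt handling and the driver ring sizes; tunables along these lines
might help (again an assumption, I have not benchmarked them on this box):

# /boot/loader.conf
hw.em.rxd="4096"
hw.em.txd="4096"

# /etc/sysctl.conf
net.inet.ip.intr_queue_maxlen=2048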

Current system:
FreeBSD s1 8.1-RELEASE FreeBSD 8.1-RELEASE #1: Thu Sep 16 15:09:54 CEST 2010
root@s1:/usr/obj/usr/src/sys/s1  amd64

em0@pci0:3:0:0: class=0x020000 card=0x115e8086 chip=0x105e8086 rev=0x06 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'HP NC360T PCIe DP Gigabit Server Adapter (n1e5132)'
    class      = network
    subclass   = ethernet
em1@pci0:3:0:1: class=0x020000 card=0x115e8086 chip=0x105e8086 rev=0x06 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'HP NC360T PCIe DP Gigabit Server Adapter (n1e5132)'
    class      = network
    subclass   = ethernet

cpuid
.
 "Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz"
.

Memory size - 24 GB

Dan suggested turning off the stateful behaviour of the firewall, since
maintaining the state tables takes a lot of time. But in my view a difference
of a whole order of magnitude is too much. On the web I found reports of
problems with 8.1 vs. packet filters; I do not know whether they concern pf
alone or also others, such as ipfw or iptables.
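For reference, a stateless variant of the inbound rules along the lines of
Dan's suggestion would look roughly like this (a sketch only; with "no state"
the return traffic needs its own pass rule, and "flags any" is needed so the
rules match more than bare SYNs):

pass in  quick on $ext_if proto tcp to   ($ext_if) port { 22, 80, 443 } flags any no state
pass out quick on $ext_if proto tcp from ($ext_if) port { 22, 80, 443 } flags any no state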
 
Another issue is pairing NICs under high load. From experience I know that
pairing interfaces often means a performance drop, sometimes down to a third
of the original (really), but it is "paid for" by the possibility of higher
availability (provided each NIC is connected to a different switch). I have
not measured the performance impact on lagg/bridge/carp interfaces under
FreeBSD, so I take the percentages above as a pessimistic estimate.

Thanks for any answers, or for ideas on how to solve this problem.

Honza



More information about the Users-l mailing list