SQUID+SQUIDGUARD


Marsh Posted on 14-12-2010 at 14:12:23

Hello there,
 
I'm new here; I often read this site for help overclocking my processors, but I saw that you also cover networking.
Let me introduce myself: Maxime, 20 years old, in the final year of a BAC SEN (Systèmes, Électronique, Numériques), telecommunications and networks option.
My course requires two work placements: I did the first one at a PC assembler, and I'm currently doing the second one in a public body (a town hall).
I'm writing to ask for a little help with the configuration of my server.
The town hall where I'm doing my placement has asked me to secure their network as much as possible.
I have already set up two servers, an IPCOP and an ALCASAR, both under Linux, and I created layer 3 VLANs on their switches.
Now I would like to set up a proxy (SQUID) and a web filter (SQUIDGUARD), still under Linux.
To do this I tried several distributions (Red Hat, Debian, openSUSE and Ubuntu),
and I chose to build it on UBUNTU.
The proxy itself works fine, but my SQUIDGUARD (the filter) does nothing at all: it filters nothing, and I have tried everything, editing the scripts and so on.
I'm really lost, because I can't understand myself why it doesn't work.
I'd like to know whether anyone here has already used this tool and how they went about configuring it.
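Here is, as far as I understood it, how SquidGuard is supposed to be wired into Squid 2.7 on Ubuntu; the paths and the single test category are just the stock squidguard package defaults, not my real configuration, so maybe someone can spot what I'm missing:
 
# in squid.conf: hand every request to squidGuard for rewriting
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 5
 
# /etc/squid/squidGuard.conf: block one category, pass everything else
dbhome /var/lib/squidguard/db
logdir /var/log/squid
 
dest porn {
    domainlist porn/domains
    urllist    porn/urls
}
 
acl {
    default {
        pass !porn all
        redirect http://localhost/blocked.html
    }
}
 
# rebuild the .db files, make them readable by the proxy user, reload Squid
squidGuard -C all
chown -R proxy:proxy /var/lib/squidguard/db
squid -k reconfigure
 
One detail I found while reading up: when squidGuard cannot read its configuration or its database files, it starts in "emergency mode" and passes every URL through unchanged, which looks exactly like "it filters nothing"; the squidGuard log in the logdir set above normally says whether that happened.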
Thank you in advance, and please excuse my spelling mistakes.
 
PS: at the end of my two placements I have to say which one I liked best. I prefer HARDWARE, because you're not shut in an office all the time and you're in contact with the customer; even if the work is a little less varied than networking, I clearly prefer it. Is that normal?


Marsh Posted on 14-12-2010 at 14:29:41

Here is my squid.conf as well, but I'm not allowed to give you the real address ranges:
 
 
# WELCOME TO SQUID 2.7.STABLE9
# ----------------------------
#
# This is the default Squid configuration file. You may wish
# to look at the Squid home page (http://www.squid-cache.org/)
# for the FAQ and other documentation.
#
# The default Squid config file shows what the defaults for
# various options happen to be.  If you don't need to change the
# default, you shouldn't uncomment the line.  Doing so may cause
# run-time problems.  In some cases "none" refers to no default
# setting at all, while in other cases it refers to a valid
# option - the comments for that keyword indicate if this is the
# case.
#
 
 
#  Configuration options can be included using the "include" directive.
#  Include takes a list of files to include. Quoting and wildcards are
#  supported.
#
#  For example,
#
#  include /path/to/included/file/squid.acl.config
#
#  Includes can be nested up to a hard-coded depth of 16 levels.
#  This arbitrary restriction is to prevent recursive include references
#  from causing Squid to enter an infinite loop whilst trying to load
#  configuration files.
 
 
# OPTIONS FOR AUTHENTICATION
# -----------------------------------------------------------------------------
 
#  TAG: auth_param
# This is used to define parameters for the various authentication
# schemes supported by Squid.
#
# format: auth_param scheme parameter [setting]
#
# The order in which authentication schemes are presented to the client is
# dependent on the order the scheme first appears in config file. IE
# has a bug (it's not RFC 2617 compliant) in that it will use the basic
# scheme if basic is the first entry presented, even if more secure
# schemes are presented. For now use the order in the recommended
# settings section below. If other browsers have difficulties (don't
# recognize the schemes offered even if you are using basic) either
# put basic first, or disable the other schemes (by commenting out their
# program entry).
#
# Once an authentication scheme is fully configured, it can only be
# shutdown by shutting squid down and restarting. Changes can be made on
# the fly and activated with a reconfigure. I.E. You can change to a
# different helper, but not unconfigure the helper completely.
#
# Please note that while this directive defines how Squid processes
# authentication it does not automatically activate authentication.
# To use authentication you must in addition make use of ACLs based
# on login name in http_access (proxy_auth, proxy_auth_regex or
# external with %LOGIN used in the format tag). The browser will be
# challenged for authentication on the first such acl encountered
# in http_access processing and will also be re-challenged for new
# login credentials if the request is being denied by a proxy_auth
# type acl.
#
# WARNING: authentication can't be used in a transparently intercepting
# proxy as the client then thinks it is talking to an origin server and
# not the proxy. This is a limitation of bending the TCP/IP protocol to
# transparently intercepting port 80, not a limitation in Squid.
#
# === Parameters for the basic scheme follow. ===
#
# "program" cmdline
# Specify the command for the external authenticator.  Such a program
# reads a line containing "username password" and replies "OK" or
# "ERR" in an endless loop. "ERR" responses may optionally be followed
# by an error description available as %m in the returned error page.
#
# By default, the basic authentication scheme is not used unless a
# program is specified.
#
# If you want to use the traditional proxy authentication, jump over to
# the helpers/basic_auth/NCSA directory and type:
#  % make
#  % make install
#
# Then, set this line to something like
#
# auth_param basic program /usr/lib/squid/ncsa_auth /usr/etc/passwd
#
# "children" numberofchildren
# The number of authenticator processes to spawn. If you start too few
# squid will have to wait for them to process a backlog of credential
# verifications, slowing it down. When credential verifications are
# done via a (slow) network you are likely to need lots of
# authenticator processes.
# auth_param basic children 5
#
# "concurrency" numberofconcurrentrequests
# The number of concurrent requests/channels the helper supports.
# Changes the protocol used to include a channel number first on
# the request/response line, allowing multiple requests to be sent
# to the same helper in parallel without waiting for the response.
# Must not be set unless it's known the helper supports this.
#
# "realm" realmstring
# Specifies the realm name which is to be reported to the client for
# the basic proxy authentication scheme (part of the text the user
# will see when prompted for their username and password).
# auth_param basic realm Squid proxy-caching web server
#
# "credentialsttl" timetolive
# Specifies how long squid assumes an externally validated
# username:password pair is valid for - in other words how often the
# helper program is called for that user. Set this low to force
# revalidation with short lived passwords.  Note that setting this high
# does not impact your susceptibility to replay attacks unless you are
# using a one-time password system (such as SecureID). If you are using
# such a system, you will be vulnerable to replay attacks unless you
# also use the max_user_ip ACL in an http_access rule.
# auth_param basic credentialsttl 2 hours
#
# "casesensitive" on|off
# Specifies if usernames are case sensitive. Most user databases are
# case insensitive allowing the same username to be spelled using both
# lower and upper case letters, but some are case sensitive. This
# makes a big difference for max_user_ip ACL processing and similar.
# auth_param basic casesensitive off
#
# "blankpassword" on|off
# Specifies if blank passwords should be supported. Defaults to off
# as there are multiple authentication backends which handle blank
# passwords as "guest" access.
#
# === Parameters for the digest scheme follow ===
#
# "program" cmdline
# Specify the command for the external authenticator.  Such a program
# reads a line containing "username":"realm" and replies with the
# appropriate H(A1) value hex encoded or ERR if the user (or his H(A1)
# hash) does not exist.  See RFC 2617 for the definition of H(A1).
# "ERR" responses may optionally be followed by an error description
# available as %m in the returned error page.
#
# By default, the digest authentication scheme is not used unless a
# program is specified.
#
# If you want to use a digest authenticator, jump over to the
# helpers/digest_auth/ directory and choose the authenticator to use.
# In its directory, type
#  % make
#  % make install
#
# Then, set this line to something like
#
# auth_param digest program /usr/lib/squid/digest_auth_pw /usr/etc/digpass
#
# "children" numberofchildren
# The number of authenticator processes to spawn. If you start too few
# squid will have to wait for them to process a backlog of credential
# verifications, slowing it down. When credential verifications are
# done via a (slow) network you are likely to need lots of
# authenticator processes.
# auth_param digest children 5
#
# "concurrency" numberofconcurrentrequests
# The number of concurrent requests/channels the helper supports.
# Changes the protocol used to include a channel number first on
# the request/response line, allowing multiple requests to be sent
# to the same helper in parallel without waiting for the response.
# Must not be set unless it's known the helper supports this.
#
# "realm" realmstring
# Specifies the realm name which is to be reported to the client for the
# digest proxy authentication scheme (part of the text the user will see
# when prompted for their username and password).
# auth_param digest realm Squid proxy-caching web server
#
# "nonce_garbage_interval" timeinterval
# Specifies the interval that nonces that have been issued to clients are
# checked for validity.
# auth_param digest nonce_garbage_interval 5 minutes
#
# "nonce_max_duration" timeinterval
# Specifies the maximum length of time a given nonce will be valid for.
# auth_param digest nonce_max_duration 30 minutes
#
# "nonce_max_count" number
# Specifies the maximum number of times a given nonce can be used.
# auth_param digest nonce_max_count 50
#
# "nonce_strictness" on|off
# Determines if squid requires strict increment-by-1 behavior for nonce
# counts, or just incrementing (off - for use when useragents generate
# nonce counts that occasionally miss 1 (ie, 1,2,4,6)).
# auth_param digest nonce_strictness off
#
# "check_nonce_count" on|off
# This directive if set to off can disable the nonce count check
# completely to work around buggy digest qop implementations in certain
# mainstream browser versions. Default on to check the nonce count to
# protect from authentication replay attacks.
# auth_param digest check_nonce_count on
#
# "post_workaround" on|off
# This is a workaround for certain buggy browsers that send an incorrect
# request digest in POST requests when reusing the same nonce as acquired
# earlier in response to a GET request.
# auth_param digest post_workaround off
#
# === NTLM scheme options follow ===
#
# "program" cmdline
# Specify the command for the external NTLM authenticator. Such a
# program participates in the NTLMSSP exchanges between Squid and the
# client and reads commands according to the Squid NTLMSSP helper
# protocol. See helpers/ntlm_auth/ for details. Recommended ntlm
# authenticator is ntlm_auth from Samba-3.X, but a number of other
# ntlm authenticators are available.
#
# By default, the ntlm authentication scheme is not used unless a
# program is specified.
#
# auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
#
# "children" numberofchildren
# The number of authenticator processes to spawn. If you start too few
# squid will have to wait for them to process a backlog of credential
# verifications, slowing it down. When credential verifications are
# done via a (slow) network you are likely to need lots of
# authenticator processes.
# auth_param ntlm children 5
#
# "keep_alive" on|off
# This option enables the use of keep-alive on the initial
# authentication request. It has been reported some versions of MSIE
# have problems if this is enabled, but performance will be increased
# if enabled.
#
# auth_param ntlm keep_alive on
#
# === Negotiate scheme options follow ===
#
# "program" cmdline
# Specify the command for the external Negotiate authenticator. Such a
# program participates in the SPNEGO exchanges between Squid and the
# client and reads commands according to the Squid ntlmssp helper
# protocol. See helpers/ntlm_auth/ for details. Recommended SPNEGO
# authenticator is ntlm_auth from Samba-4.X.
#
# By default, the Negotiate authentication scheme is not used unless a
# program is specified.
#
# auth_param negotiate program /path/to/samba/bin/ntlm_auth --helper-protocol=gss-spnego
#
# "children" numberofchildren
# The number of authenticator processes to spawn. If you start too few
# squid will have to wait for them to process a backlog of credential
# verifications, slowing it down. When credential verifications are
# done via a (slow) network you are likely to need lots of
# authenticator processes.
# auth_param negotiate children 5
#
# "keep_alive" on|off
# If you experience problems with PUT/POST requests when using the
# Negotiate authentication scheme then you can try setting this to
# off. This will cause Squid to forcibly close the connection on
# the initial requests where the browser asks which schemes are
# supported by the proxy.
#
# auth_param negotiate keep_alive on
#
#Recommended minimum configuration per scheme:
#auth_param negotiate program <uncomment and complete this line to activate>
#auth_param negotiate children 5
#auth_param negotiate keep_alive on
#auth_param ntlm program <uncomment and complete this line to activate>
#auth_param ntlm children 5
#auth_param ntlm keep_alive on
#auth_param digest program <uncomment and complete this line>
#auth_param digest children 5
#auth_param digest realm Squid proxy-caching web server
#auth_param digest nonce_garbage_interval 5 minutes
#auth_param digest nonce_max_duration 30 minutes
#auth_param digest nonce_max_count 50
#auth_param basic program <uncomment and complete this line>
#auth_param basic children 5
#auth_param basic realm Squid proxy-caching web server
#auth_param basic credentialsttl 2 hours
#auth_param basic casesensitive off
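#
# A minimal, commented-out sketch of turning on basic authentication with the
# bundled ncsa_auth helper; the password file path below is only an assumption
# (it can be created with "htpasswd -c /etc/squid/passwd someuser" from the
# apache2-utils package):
#
#auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
#auth_param basic children 5
#auth_param basic realm Squid proxy-caching web server
#acl auth_users proxy_auth REQUIRED
#http_access allow auth_users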
 
#  TAG: authenticate_cache_garbage_interval
# The time period between garbage collection across the username cache.
# This is a tradeoff between memory utilization (long intervals - say
# 2 days) and CPU (short intervals - say 1 minute). Only change if you
# have good reason to.
#
#Default:
# authenticate_cache_garbage_interval 1 hour
 
#  TAG: authenticate_ttl
# The time a user & their credentials stay in the logged in user cache
# since their last request. When the garbage interval passes, all user
# credentials that have passed their TTL are removed from memory.
#
#Default:
# authenticate_ttl 1 hour
 
#  TAG: authenticate_ip_ttl
# If you use proxy authentication and the 'max_user_ip' ACL, this
# directive controls how long Squid remembers the IP addresses
# associated with each user.  Use a small value (e.g., 60 seconds) if
# your users might change addresses quickly, as is the case with
# dialups. You might be safe using a larger value (e.g., 2 hours) in a
# corporate LAN environment with relatively static address assignments.
#
#Default:
# authenticate_ip_ttl 0 seconds
 
#  TAG: authenticate_ip_shortcircuit_ttl
# Cache authentication credentials per client IP address for this
# long. Default is 0 seconds (disabled).
#
# See also authenticate_ip_shortcircuit_access directive.
#
#Default:
# authenticate_ip_shortcircuit_ttl 0 seconds
 
 
# ACCESS CONTROLS
# -----------------------------------------------------------------------------
 
#  TAG: external_acl_type
# This option defines external acl classes using a helper program to
# look up the status
#
#   external_acl_type name [options] FORMAT.. /path/to/helper [helper arguments..]
#
# Options:
#
#   ttl=n  TTL in seconds for cached results (defaults to 3600
#   for 1 hour)
#   negative_ttl=n
#   TTL for cached negative lookups (default same
#   as ttl)
#   children=n number of processes spawned to service external acl
#   lookups of this type. (default 5).
#   concurrency=n concurrency level per process. Only used with helpers
#     capable of processing more than one query at a time.
#   Note: see compatibility note below
#   cache=n result cache size, 0 is unbounded (default)
#   grace= Percentage remaining of TTL where a refresh of a
#   cached entry should be initiated without needing to
#   wait for a new reply. (default 0 for no grace period)
#   protocol=2.5  Compatibility mode for Squid-2.5 external acl helpers
#
# FORMAT specifications
#
#   %LOGIN Authenticated user login name
#   %EXT_USER Username from external acl
#   %IDENT Ident user name
#   %SRC  Client IP
#   %SRCPORT Client source port
#   %URI  Requested URI
#   %DST  Requested host
#   %PROTO Requested protocol
#   %PORT  Requested port
#   %METHOD Request method
#   %MYADDR Squid interface address
#   %MYPORT Squid http_port number
#   %PATH  Requested URL-path (including query-string if any)
#   %USER_CERT SSL User certificate in PEM format
#   %USER_CERTCHAIN SSL User certificate chain in PEM format
#   %USER_CERT_xx SSL User certificate subject attribute xx
#   %USER_CA_xx SSL User certificate issuer attribute xx
#   %{Header} HTTP request header "Header"
#   %{Hdr:member} HTTP request header "Hdr" list member "member"
#   %{Hdr:;member}
#   HTTP request header list member using ; as
#   list separator. ; can be any non-alphanumeric
#   character.
#  %ACL  The ACL name
#  %DATA  The ACL arguments. If not used then any arguments
#   are automatically added at the end
#
# In addition to the above, any string specified in the referencing
# acl will also be included in the helper request line, after the
# specified formats (see the "acl external" directive)
#
# The helper receives lines per the above format specification,
# and returns lines starting with OK or ERR indicating the validity
# of the request and optionally followed by additional keywords with
# more details.
#
# General result syntax:
#
#   OK/ERR keyword=value ...
#
# Defined keywords:
#
#   user=  The users name (login also understood)
#   password= The users password (for PROXYPASS login= cache_peer)
#   message= Error message or similar used as %o in error messages
#   (error also understood)
#   log=  String to be logged in access.log. Available as
#   %ea in logformat specifications
#
# If protocol=3.0 (the default) then URL escaping is used to protect
# each value in both requests and responses.
#
# If using protocol=2.5 then all values need to be enclosed in quotes
# if they may contain whitespace, or the whitespace escaped using \.
# And quotes or \ characters within the keyword value must be \ escaped.
#
# When using the concurrency= option the protocol is changed by
# introducing a query channel tag in front of the request/response.
# The query channel tag is a number between 0 and concurrency-1.
#
# Compatibility Note: The children= option was named concurrency= in
# Squid-2.5.STABLE3 and earlier, and was accepted as an alias for the
# duration of the Squid-2.5 releases to keep compatibility. However,
# the meaning of concurrency= option has changed in Squid-2.6 to match
# that of Squid-3 and the old syntax no longer works.
#
#Default:
# none
 
#  TAG: acl
# Defining an Access List
#
#    Every access list definition must begin with an aclname and acltype,  
#    followed by either type-specific arguments or a quoted filename that
#    they are read from.
#
# acl aclname acltype argument ...
# acl aclname acltype "file" ...
#
# when using "file", the file should contain one item per line.
#
# By default, regular expressions are CASE-SENSITIVE.  To make
# them case-insensitive, use the -i option.
#
# acl aclname src      ip-address/netmask ... (clients IP address)
# acl aclname src      addr1-addr2/netmask ... (range of addresses)
# acl aclname dst      ip-address/netmask ... (URL host's IP address)
# acl aclname myip     ip-address/netmask ... (local socket IP address)
#
# acl aclname arp      mac-address ... (xx:xx:xx:xx:xx:xx notation)
#   # The arp ACL requires the special configure option --enable-arp-acl.
#   # Furthermore, the arp ACL code is not portable to all operating systems.
#   # It works on Linux, Solaris, FreeBSD and some other *BSD variants.
#   #
#   # NOTE: Squid can only determine the MAC address for clients that are on
#   # the same subnet. If the client is on a different subnet, then Squid cannot
#   # find out its MAC address.
#
# acl aclname srcdomain   .foo.com ...    # reverse lookup, client IP
# acl aclname dstdomain   .foo.com ...    # Destination server from URL
# acl aclname srcdom_regex [-i] xxx ...   # regex matching client name
# acl aclname dstdom_regex [-i] xxx ...   # regex matching server
#   # For dstdomain and dstdom_regex a reverse lookup is tried if an IP
#   # based URL is used and no match is found. The name "none" is used
#   # if the reverse lookup fails.
#
# acl aclname time     [day-abbrevs]  [h1:m1-h2:m2]
#     # day-abbrevs:
#  # S - Sunday
#  # M - Monday
#  # T - Tuesday
#  # W - Wednesday
#  # H - Thursday
#  # F - Friday
#  # A - Saturday
#     # h1:m1 must be less than h2:m2
# acl aclname url_regex [-i] ^http:// ...     # regex matching on whole URL
# acl aclname urlpath_regex [-i] \.gif$ ... # regex matching on URL path
# acl aclname urllogin [-i] [^a-zA-Z0-9] ... # regex matching on URL login field
# acl aclname port     80 70 21 ...
# acl aclname port     0-1024 ...  # ranges allowed
# acl aclname myport   3128 ...  # (local socket TCP port)
# acl aclname myportname 3128 ...  # http(s)_port name
# acl aclname proto    HTTP FTP ...
# acl aclname method   GET POST ...
# acl aclname browser  [-i] regexp ...
#   # pattern match on User-Agent header (see also req_header below)
# acl aclname referer_regex  [-i] regexp ...
#   # pattern match on Referer header
#   # Referer is highly unreliable, so use with care
# acl aclname ident    username ...
# acl aclname ident_regex [-i] pattern ...
#   # string match on ident output.
#   # use REQUIRED to accept any non-null ident.
# acl aclname src_as   number ...
# acl aclname dst_as   number ...
#   # Except for access control, AS numbers can be used for
#   # routing of requests to specific caches. Here's an
#   # example for routing all requests for AS#1241 and only
#   # those to mycache.mydomain.net:
#   # acl asexample dst_as 1241
#   # cache_peer_access mycache.mydomain.net allow asexample
#   # cache_peer_access mycache_mydomain.net deny all
#
# acl aclname proxy_auth [-i] username ...
# acl aclname proxy_auth_regex [-i] pattern ...
#   # list of valid usernames
#   # use REQUIRED to accept any valid username.
#   #
#   # NOTE: when a Proxy-Authentication header is sent but it is not
#   # needed during ACL checking the username is NOT logged
#   # in access.log.
#   #
#   # NOTE: proxy_auth requires a EXTERNAL authentication program
#   # to check username/password combinations (see
#   # auth_param directive).
#   #
#   # NOTE: proxy_auth can't be used in a transparent proxy as
#   # the browser needs to be configured for using a proxy in order
#   # to respond to proxy authentication.
#
# acl aclname snmp_community string ...
#   # A community string to limit access to your SNMP Agent
#   # Example:
#   #
#   # acl snmppublic snmp_community public
#
# acl aclname maxconn number
#   # This will be matched when the client's IP address has
#   # more than <number> HTTP connections established.
#
# acl aclname max_user_ip [-s] number
#   # This will be matched when the user attempts to log in from more
#   # than <number> different ip addresses. The authenticate_ip_ttl
#   # parameter controls the timeout on the ip entries.
#   # If -s is specified the limit is strict, denying browsing
#   # from any further IP addresses until the ttl has expired. Without
#   # -s Squid will just annoy the user by "randomly" denying requests.
#   # (the counter is reset each time the limit is reached and a
#   # request is denied)
#   # NOTE: in acceleration mode or where there is a mesh of child proxies,
#   # clients may appear to come from multiple addresses if they are
#   # going through proxy farms, so a limit of 1 may cause user problems.
#
# acl aclname req_mime_type mime-type ...
#   # regex match against the mime type of the request generated
#   # by the client. Can be used to detect file upload or some
#   # types of HTTP tunneling requests.
#   # NOTE: This does NOT match the reply. You cannot use this
#   # to match the returned file type.
#
# acl aclname req_header header-name [-i] any\.regex\.here
#   # regex match against any of the known request headers.  May be
#   # thought of as a superset of "browser", "referer" and "mime-type"
#   # ACLs.
#
# acl aclname rep_mime_type mime-type ...
#   # regex match against the mime type of the reply received by
#   # squid. Can be used to detect file download or some
#   # types of HTTP tunneling requests.
#   # NOTE: This has no effect in http_access rules. It only has
#   # effect in rules that affect the reply data stream such as
#   # http_reply_access.
#
# acl aclname rep_header header-name [-i] any\.regex\.here
#   # regex match against any of the known reply headers. May be
#   # thought of as a superset of "browser", "referer" and "mime-type"
#   # ACLs.
#   #
#   # Example:
#   #
#   # acl many_spaces rep_header Content-Disposition -i [[:space:]]{3,}
#
# acl aclname external class_name [arguments...]
#   # external ACL lookup via a helper class defined by the
#   # external_acl_type directive.
#
# acl aclname urlgroup group1 ...
#   # match against the urlgroup as indicated by redirectors
#
# acl aclname user_cert attribute values...
#   # match against attributes in a user SSL certificate
#   # attribute is one of DN/C/O/CN/L/ST
#
# acl aclname ca_cert attribute values...
#   # match against attributes in a user's issuing CA SSL certificate
#   # attribute is one of DN/C/O/CN/L/ST
#
# acl aclname ext_user username ...
# acl aclname ext_user_regex [-i] pattern ...
#   # string match on username returned by external acl helper
#   # use REQUIRED to accept any non-null user name.
#
#Examples:
#acl macaddress arp 09:00:2b:23:45:67
#acl myexample dst_as 1241
#acl password proxy_auth REQUIRED
#acl fileupload req_mime_type -i ^multipart/form-data$
#acl javascript rep_mime_type -i ^application/x-javascript$
#
#Recommended minimum configuration:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443  # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210  # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280  # http-mgmt
acl Safe_ports port 488  # gss-http
acl Safe_ports port 591  # filemaker
acl Safe_ports port 777  # multiling http
acl Safe_ports port 631  # cups
acl Safe_ports port 873  # rsync
acl Safe_ports port 901  # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl multipostes src 192.xxx.xxx.xxx/24  # I changed this #######################
#  TAG: http_access
# Allowing or Denying access based on defined access lists
#
# Access to the HTTP port:
# http_access allow|deny [!]aclname ...
#
# NOTE on default values:
#
# If there are no "access" lines present, the default is to deny
# the request.
#
# If none of the "access" lines cause a match, the default is the
# opposite of the last line in the list.  If the last line was
# deny, the default is allow.  Conversely, if the last line
# is allow, the default will be deny.  For these reasons, it is a
# good idea to have an "deny all" or "allow all" entry at the end
# of your access lists to avoid potential confusion.
#
#Default:
# http_access deny all
#
http_access allow all  # I changed this ######################################
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Only allow purge requests from localhost
http_access allow purge localhost
http_access deny purge
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
 
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
http_access allow localhost
 
# And finally deny all other access to this proxy
http_access deny all
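#
# Note that http_access rules are checked top to bottom and the first match
# wins, so with "http_access allow all" placed above them, none of the rules
# that follow are ever reached. A sketch of the usual ordering, kept commented
# out here (multipostes is the site ACL defined earlier; using it this way is
# only an assumption):
#
#http_access allow manager localhost
#http_access deny manager
#http_access allow purge localhost
#http_access deny purge
#http_access deny !Safe_ports
#http_access deny CONNECT !SSL_ports
#http_access allow localhost
#http_access allow multipostes
#http_access deny all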
 
#  TAG: http_access2
# Allowing or Denying access based on defined access lists
#
# Identical to http_access, but runs after redirectors. If not set
# then only http_access is used.
#
#Default:
# none
 
#  TAG: http_reply_access
# Allow replies to client requests. This is complementary to http_access.
#
# http_reply_access allow|deny [!] aclname ...
#
# NOTE: if there are no access lines present, the default is to allow
# all replies
#
# If none of the access lines cause a match the opposite of the
# last line will apply. Thus it is good practice to end the rules
# with an "allow all" or "deny all" entry.
#
#Default:
# http_reply_access allow all
 
#  TAG: icp_access
# Allowing or Denying access to the ICP port based on defined
# access lists
#
# icp_access  allow|deny [!]aclname ...
#
# See http_access for details
#
#Default:
# icp_access deny all
#
#Allow ICP queries from local networks only
icp_access allow localnet
icp_access deny all
 
#  TAG: htcp_access
# Allowing or Denying access to the HTCP port based on defined
# access lists
#
# htcp_access  allow|deny [!]aclname ...
#
# See http_access for details
#
# NOTE: The default if no htcp_access lines are present is to
# deny all traffic. This default may cause problems with peers
# using the htcp or htcp-oldsquid options.
#
#Default:
# htcp_access deny all
#
#Allow HTCP queries from local networks only
# htcp_access allow localnet
# htcp_access deny all
 
#  TAG: htcp_clr_access
# Allowing or Denying access to purge content using HTCP based
# on defined access lists
#
# htcp_clr_access  allow|deny [!]aclname ...
#
# See http_access for details
#
##Allow HTCP CLR requests from trusted peers
#acl htcp_clr_peer src 172.16.1.2
#htcp_clr_access allow htcp_clr_peer
#
#Default:
# htcp_clr_access deny all
 
#  TAG: miss_access
# Use to force your neighbors to use you as a sibling instead of
# a parent.  For example:
#
#  acl localclients src 172.16.0.0/16
#  miss_access allow localclients
#  miss_access deny  !localclients
#
# This means only your local clients are allowed to fetch
# MISSES and all other clients can only fetch HITS.
#
# By default, allow all clients who passed the http_access rules
# to fetch MISSES from us.
#
#Default setting:
# miss_access allow all
 
#  TAG: ident_lookup_access
# A list of ACL elements which, if matched, cause an ident
# (RFC931) lookup to be performed for this request.  For
# example, you might choose to always perform ident lookups
# for your main multi-user Unix boxes, but not for your Macs
# and PCs.  By default, ident lookups are not performed for
# any requests.
#
# To enable ident lookups for specific client addresses, you
# can follow this example:
#
# acl ident_aware_hosts src 198.168.1.0/255.255.255.0
# ident_lookup_access allow ident_aware_hosts
# ident_lookup_access deny all
#
# Only src type ACL checks are fully supported.  A src_domain
# ACL might work at times, but it will not always provide
# the correct result.
#
#Default:
# ident_lookup_access deny all
 
#  TAG: reply_body_max_size bytes deny acl acl...
# This option specifies the maximum size of a reply body in bytes.
# It can be used to prevent users from downloading very large files,
# such as MP3's and movies. When the reply headers are received,
# the reply_body_max_size lines are processed, and the first line with
# a result of "deny" is used as the maximum body size for this reply.
# This size is checked twice. First when we get the reply headers,
# we check the content-length value.  If the content length value exists
# and is larger than the allowed size, the request is denied and the
# user receives an error message that says "the request or reply
# is too large." If there is no content-length, and the reply
# size exceeds this limit, the client's connection is just closed
# and they will receive a partial reply.
#
# WARNING: downstream caches probably can not detect a partial reply
# if there is no content-length header, so they will cache
# partial responses and give them out as hits.  You should NOT
# use this option if you have downstream caches.
#
# If you set this parameter to zero (the default), there will be
# no limit imposed.
#
#Default:
# reply_body_max_size 0 allow all
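#
# As an illustration of the syntax described above (the 100 MB figure is an
# arbitrary example, not a recommendation):
#
#reply_body_max_size 104857600 deny all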
 
#  TAG: authenticate_ip_shortcircuit_access
# Access list determining when shortcircuiting the authentication process
# based on source IP cached credentials is acceptable. Use this to deny
# using the ip auth cache on requests from child proxies or other source
# ip's having multiple users.
#
# See also authenticate_ip_shortcircuit_ttl directive
#
#Default:
# none
 
 
# OPTIONS FOR X-Forwarded-For
# -----------------------------------------------------------------------------
 
#  TAG: follow_x_forwarded_for
# Allowing or Denying the X-Forwarded-For header to be followed to
# find the original source of a request.
#
# Requests may pass through a chain of several other proxies
# before reaching us.  The X-Forwarded-For header will contain a
# comma-separated list of the IP addresses in the chain, with the
# rightmost address being the most recent.
#
# If a request reaches us from a source that is allowed by this
# configuration item, then we consult the X-Forwarded-For header
# to see where that host received the request from.  If the
# X-Forwarded-For header contains multiple addresses, and if
# acl_uses_indirect_client is on, then we continue backtracking
# until we reach an address for which we are not allowed to
# follow the X-Forwarded-For header, or until we reach the first
# address in the list.  (If acl_uses_indirect_client is off, then
# it's impossible to backtrack through more than one level of
# X-Forwarded-For addresses.)
#
# The end result of this process is an IP address that we will
# refer to as the indirect client address.  This address may
# be treated as the client address for access control, delay
# pools and logging, depending on the acl_uses_indirect_client,
# delay_pool_uses_indirect_client and log_uses_indirect_client
# options.
#
# SECURITY CONSIDERATIONS:
#
#  Any host for which we follow the X-Forwarded-For header
#  can place incorrect information in the header, and Squid
#  will use the incorrect information as if it were the
#  source address of the request.  This may enable remote
#  hosts to bypass any access control restrictions that are
#  based on the client's source addresses.
#
# For example:
#
#  acl localhost src 127.0.0.1
#  acl my_other_proxy srcdomain .proxy.example.com
#  follow_x_forwarded_for allow localhost
#  follow_x_forwarded_for allow my_other_proxy
#
#Default:
# follow_x_forwarded_for deny all
 
#  TAG: acl_uses_indirect_client on|off
# Controls whether the indirect client address
# (see follow_x_forwarded_for) is used instead of the
# direct client address in acl matching.
#
#Default:
# acl_uses_indirect_client on
 
#  TAG: delay_pool_uses_indirect_client on|off
# Controls whether the indirect client address
# (see follow_x_forwarded_for) is used instead of the
# direct client address in delay pools.
#
#Default:
# delay_pool_uses_indirect_client on
 
#  TAG: log_uses_indirect_client on|off
# Controls whether the indirect client address
# (see follow_x_forwarded_for) is used instead of the
# direct client address in the access log.
#
#Default:
# log_uses_indirect_client on
 
 
# SSL OPTIONS
# -----------------------------------------------------------------------------
 
#  TAG: ssl_unclean_shutdown
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Some browsers (especially MSIE) bug out on SSL shutdown
# messages.
#
#Default:
# ssl_unclean_shutdown off
 
#  TAG: ssl_engine
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# The OpenSSL engine to use. You will need to set this if you
# would like to use hardware SSL acceleration for example.
#
#Default:
# none
 
#  TAG: sslproxy_client_certificate
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Client SSL Certificate to use when proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_client_key
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Client SSL Key to use when proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_version
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# SSL version level to use when proxying https:// URLs
#
#Default:
# sslproxy_version 1
 
#  TAG: sslproxy_options
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# SSL engine options to use when proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_cipher
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# SSL cipher list to use when proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_cafile
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# file containing CA certificates to use when verifying server
# certificates while proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_capath
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# directory containing CA certificates to use when verifying
# server certificates while proxying https:// URLs
#
#Default:
# none
 
#  TAG: sslproxy_flags
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Various flags modifying the use of SSL while proxying https:// URLs:
#     DONT_VERIFY_PEER    Accept certificates even if they fail to
#    verify.
#     NO_DEFAULT_CA       Don't use the default CA list built in
#    to OpenSSL.
#
#Default:
# none
 
#  TAG: sslpassword_program
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Specify a program used for entering SSL key passphrases
# when using encrypted SSL certificate keys. If not specified
# keys must either be unencrypted, or Squid started with the -N
# option to allow it to query interactively for the passphrase.
#
#Default:
# none
 
 
# NETWORK OPTIONS
# -----------------------------------------------------------------------------
 
#  TAG: http_port
# Usage: port [options]
#  hostname:port [options]
#  1.2.3.4:port [options]
#
# The socket addresses where Squid will listen for HTTP client
# requests.  You may specify multiple socket addresses.
# There are three forms: port alone, hostname with port, and
# IP address with port.  If you specify a hostname or IP
# address, Squid binds the socket to that specific
# address.  This replaces the old 'tcp_incoming_address'
# option.  Most likely, you do not need to bind to a specific
# address, so you can use the port number alone.
#
# If you are running Squid in accelerator mode, you
# probably want to listen on port 80 also, or instead.
#
# The -I command line option will override the *first* port
# specified here.
#
# You may specify multiple socket addresses on multiple lines.
#
# Options:
#
#    transparent Support for transparent interception of
#   outgoing requests without browser settings.
#
#    tproxy Support Linux TPROXY for spoofing outgoing
#   connections using the client IP address.
#
#    accel Accelerator mode. See also the related vhost,
#   vport and defaultsite directives.
#
#    defaultsite=domainname
#   What to use for the Host: header if it is not present
#   in a request. Determines what site (not origin server)
#   accelerators should consider the default.
#   Defaults to visible_hostname:port if not set
#   May be combined with vport=NN to override the port number.
#   Implies accel.
#
#    vhost Accelerator mode using Host header for virtual
#   domain support. Implies accel.
#
#    vport Accelerator with IP based virtual host support.
#   Implies accel.
#
#    vport=NN As above, but uses specified port number rather
#   than the http_port number. Implies accel.
#
#    allow-direct Allow direct forwarding in accelerator mode. Normally
#      accelerated requests are denied direct forwarding, as if
#   never_direct was used.
#
#    urlgroup= Default urlgroup to mark requests with (see
#   also acl urlgroup and url_rewrite_program)
#
#    protocol= Protocol to reconstruct accelerated requests with.
#   Defaults to http.
#
#    no-connection-auth
#   Prevent forwarding of Microsoft connection oriented
#   authentication (NTLM, Negotiate and Kerberos)
#
#    act-as-origin
#      Act as if this Squid is the origin server.
#   This currently means generate own Date: and
#   Expires: headers. Implies accel.
#
#    http11 Enables HTTP/1.1 support to clients. The HTTP/1.1
#   support is still incomplete with an internal HTTP/1.0
#   hop, but should work with most clients. The main
#   HTTP/1.1 features missing due to this is forwarding
#   of requests using chunked transfer encoding (results
#   in 411) and forwarding of 1xx responses (silently
#   dropped)
#
#    name= Specifies an internal name for the port. Defaults to
#   the port specification (port or addr:port)
#
#    tcpkeepalive[=idle,interval,timeout]
#   Enable TCP keepalive probes of idle connections
#   idle is the initial time before TCP starts probing
#   the connection, interval how often to probe, and
#   timeout the time before giving up.
#
# If you run Squid on a dual-homed machine with an internal
# and an external interface we recommend you to specify the
# internal address:port in http_port. This way Squid will only be
# visible on the internal address.
#
# Squid normally listens to port 3128
http_port 3128
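# On a dual-homed machine the internal address can be given explicitly, as the
# note above recommends; 10.0.0.1 is only a placeholder for the real internal
# address:
#http_port 10.0.0.1:3128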
 
#  TAG: https_port
# Note: This option is only available if Squid is rebuilt with the
#       --enable-ssl option
#
# Usage:  [ip:]port cert=certificate.pem [key=key.pem] [options...]
#
# The socket address where Squid will listen for HTTPS client
# requests.
#
# This is really only useful for situations where you are running
# squid in accelerator mode and you want to do the SSL work at the
# accelerator level.
#
# You may specify multiple socket addresses on multiple lines,
# each with their own SSL certificate and/or options.
#
# Options:
#
# In addition to the options specified for http_port the following
# SSL related options are supported:
#
#    cert= Path to SSL certificate (PEM format).
#
#    key=  Path to SSL private key file (PEM format)
#   if not specified, the certificate file is
#   assumed to be a combined certificate and
#   key file.
#
#    version= The version of SSL/TLS supported
#       1 automatic (default)
#       2 SSLv2 only
#       3 SSLv3 only
#       4 TLSv1 only
#
#    cipher= Colon separated list of supported ciphers.
#
#    options= Various SSL engine options. The most important
#   being:
#       NO_SSLv2  Disallow the use of SSLv2
#       NO_SSLv3  Disallow the use of SSLv3
#       NO_TLSv1  Disallow the use of TLSv1
#       SINGLE_DH_USE Always create a new key when using
#          temporary/ephemeral DH key exchanges
#   See src/ssl_support.c or OpenSSL SSL_CTX_set_options
#   documentation for a complete list of options.
#
#    clientca= File containing the list of CAs to use when
#   requesting a client certificate.
#
#    cafile= File containing additional CA certificates to
#   use when verifying client certificates. If unset
#   clientca will be used.
#
#    capath= Directory containing additional CA certificates
#   and CRL lists to use when verifying client certificates.
#
#    crlfile= File of additional CRL lists to use when verifying
#   the client certificate, in addition to CRLs stored in
#   the capath. Implies VERIFY_CRL flag below.
#
#    dhparams= File containing DH parameters for temporary/ephemeral
#   DH key exchanges.
#
#    sslflags= Various flags modifying the use of SSL:
#       DELAYED_AUTH
#    Don't request client certificates
#    immediately, but wait until acl processing
#    requires a certificate (not yet implemented).
#       NO_DEFAULT_CA
#    Don't use the default CA lists built in
#    to OpenSSL.
#       NO_SESSION_REUSE
#    Don't allow for session reuse. Each connection
#    will result in a new SSL session.
#       VERIFY_CRL
#    Verify CRL lists when accepting client
#    certificates.
#       VERIFY_CRL_ALL
#    Verify CRL lists for all certificates in the
#    client certificate chain.
#
#    sslcontext= SSL session ID context identifier.
#
#
#Default:
# none
 
#  TAG: tcp_outgoing_tos
# Allows you to select a TOS/Diffserv value to mark outgoing
# connections with, based on the username or source address
# making the request.
#
# tcp_outgoing_tos ds-field [!]aclname ...
#
# Example where normal_service_net uses the TOS value 0x00
# and good_service_net uses 0x20
#
# acl normal_service_net src 10.0.0.0/255.255.255.0
# acl good_service_net src 10.0.1.0/255.255.255.0
# tcp_outgoing_tos 0x00 normal_service_net
# tcp_outgoing_tos 0x20 good_service_net
#
# TOS/DSCP values really only have local significance - so you should
# know what you're specifying. For more information, see RFC2474 and
# RFC3260.
#
# The TOS/DSCP byte must be exactly that - an octet value  0 - 255, or
# "default" to use whatever default your host has. Note that in
# practice often only values 0 - 63 are usable as the two highest bits
# have been redefined for use by ECN (RFC3168).
#
# Processing proceeds in the order specified, and stops at first fully
# matching line.
#
# Note: The use of this directive using client dependent ACLs is
# incompatible with the use of server side persistent connections. To
# ensure correct results it is best to set server_persistent_connections
# to off when using this directive in such configurations.
#
#Default:
# none
 
#  TAG: tcp_outgoing_address
# Allows you to map requests to different outgoing IP addresses
# based on the username or source address of the user making
# the request.
#
# tcp_outgoing_address ipaddr [[!]aclname] ...
#
# Example where requests from 10.0.0.0/24 will be forwarded
# with source address 10.1.0.1, 10.0.2.0/24 forwarded with
# source address 10.1.0.2 and the rest will be forwarded with
# source address 10.1.0.3.
#
# acl normal_service_net src 10.0.0.0/24
# acl good_service_net src 10.0.1.0/24 10.0.2.0/24
# tcp_outgoing_address 10.1.0.1 normal_service_net
# tcp_outgoing_address 10.1.0.2 good_service_net
# tcp_outgoing_address 10.1.0.3
#
# Processing proceeds in the order specified, and stops at first fully
# matching line.
#
# Note: The use of this directive using client dependent ACLs is
# incompatible with the use of server side persistent connections. To
# ensure correct results it is best to set server_persistent_connections
# to off when using this directive in such configurations.
#
#Default:
# none
 
#  TAG: zph_mode
# This option enables packet level marking of HIT/MISS responses,
# either using IP TOS or socket priority.
#     off  Feature disabled
#     tos  Set the IP TOS/Diffserv field
#     priority Set the socket priority (may get mapped to TOS by OS,
#   otherwise only usable in local rulesets)
#     option Embed the mark in an IP option field. See also
#       zph_option.
#
# See also tcp_outgoing_tos for details/requirements about TOS usage.
#
#Default:
# zph_mode off
 
#  TAG: zph_local
# Allows you to select a TOS/Diffserv/Priority value to mark local hits.
# Default: 0 (disabled).
#
#Default:
# zph_local 0
 
#  TAG: zph_sibling
# Allows you to select a TOS/Diffserv/Priority value to mark sibling hits.
# Default: 0 (disabled).
#
#Default:
# zph_sibling 0
 
#  TAG: zph_parent
# Allows you to select a TOS/Diffserv/Priority value to mark parent hits.  
# Default: 0 (disabled).
#
#Default:
# zph_parent 0
 
#  TAG: zph_option
# The IP option to use when zph_mode is set to "option". Defaults to
# 136 which is officially registered as "SATNET Stream ID".
#
#Default:
# zph_option 136
 
 
# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
# -----------------------------------------------------------------------------
 
#  TAG: cache_peer
# To specify other caches in a hierarchy, use the format:
#
#  cache_peer hostname type http-port icp-port [options]
#
# For example,
#
# #                                        proxy  icp
# #          hostname             type     port   port  options
# #          -------------------- -------- ----- -----  -----------
# cache_peer parent.foo.net       parent    3128  3130  proxy-only default
# cache_peer sib1.foo.net         sibling   3128  3130  proxy-only
# cache_peer sib2.foo.net         sibling   3128  3130  proxy-only
#
#       type:  either 'parent', 'sibling', or 'multicast'.
#
# proxy-port:  The port number where the cache listens for proxy
#       requests.
#
#   icp-port:  Used for querying neighbor caches about
#       objects.  To have a non-ICP neighbor
#       specify '7' for the ICP port and make sure the
#       neighbor machine has the UDP echo port
#       enabled in its /etc/inetd.conf file.
#  NOTE: Also requires icp_port option enabled to send/receive
#        requests via this method.
#
#     options: proxy-only
#       weight=n
#       ttl=n
#       no-query
#       default
#       round-robin
#       carp
#       multicast-responder
#       multicast-siblings
#       closest-only
#       no-digest
#       no-netdb-exchange
#       no-delay
#       login=user:password | PASS | *:password
#       connect-timeout=nn
#       digest-url=url
#       allow-miss
#       max-conn=n
#       htcp
#       htcp-oldsquid
#       originserver
#       userhash
#       sourcehash
#       name=xxx
#       monitorurl=url
#       monitorsize=sizespec
#       monitorinterval=seconds
#       monitortimeout=seconds
#       forceddomain=name
#       ssl
#       sslcert=/path/to/ssl/certificate
#       sslkey=/path/to/ssl/key
#       sslversion=1|2|3|4
#       sslcipher=...
#       ssloptions=...
#       front-end-https[=on|auto]
#       connection-auth[=on|off|auto]
#       idle=n
#       http11
#
#       use 'proxy-only' to specify objects fetched
#       from this cache should not be saved locally.
#
#       use 'weight=n' to affect the selection of a peer
#       during any weighted peer-selection mechanisms.
#       The weight must be an integer; default is 1,
#       larger weights are favored more.
#       This option does not affect parent selection if a peering
#       protocol is not in use.
#
#       use 'ttl=n' to specify a IP multicast TTL to use
#       when sending ICP queries to this address.
#       Only useful when sending to a multicast group.
#       Because we don't accept ICP replies from random
#       hosts, you must configure other group members as
#       peers with the 'multicast-responder' option below.
#
#       use 'no-query' to NOT send ICP queries to this
#       neighbor.
#
#       use 'default' if this is a parent cache which can
#       be used as a "last-resort" if a peer cannot be located
#       by any of the peer-selection mechanisms.
#       If specified more than once, only the first is used.
#
#       use 'round-robin' to define a set of parents which
#       should be used in a round-robin fashion in the
#       absence of any ICP queries.
#
#       use 'carp' to define a set of parents which should
#       be used as a CARP array. The requests will be
#       distributed among the parents based on the CARP load
#       balancing hash function based on their weight.
#
#       'multicast-responder' indicates the named peer
#       is a member of a multicast group.  ICP queries will
#       not be sent directly to the peer, but ICP replies
#       will be accepted from it.
#
#       the 'multicast-siblings' option is meant to be used
#       only for cache peers of type "multicast". It instructs
#       Squid that ALL members of this multicast group have
#       "sibling" relationship with it, not "parent".  This is
#       an optimization that avoids useless multicast queries
#       to a multicast group when the requested object would
#       be fetched only from a "parent" cache, anyway.  It's
#       useful, e.g., when configuring a pool of redundant
#       Squid proxies, being members of the same
#       multicast group.
#
#       'closest-only' indicates that, for ICP_OP_MISS
#       replies, we'll only forward CLOSEST_PARENT_MISSes
#       and never FIRST_PARENT_MISSes.
#
#       use 'no-digest' to NOT request cache digests from
#       this neighbor.
#
#       'no-netdb-exchange' disables requesting ICMP
#       RTT database (NetDB) from the neighbor.
#
#       use 'no-delay' to prevent access to this neighbor
#       from influencing the delay pools.
#
#       use 'login=user:password' if this is a personal/workgroup
#       proxy and your parent requires proxy authentication.
#       Note: The string can include URL escapes (i.e. %20 for
#       spaces). This also means % must be written as %%.
#
#       use 'login=PASS' if users must authenticate against
#       the upstream proxy or in the case of a reverse proxy
#       configuration, the origin web server.  This will pass
#       the users credentials as they are to the peer.
#       Note: To combine this with local authentication the Basic
#       authentication scheme must be used, and both servers must
#       share the same user database as HTTP only allows for
#       a single login (one for proxy, one for origin server).
#       Also be warned this will expose your users proxy
#       password to the peer. USE WITH CAUTION
#
#       use 'login=*:password' to pass the username to the
#       upstream cache, but with a fixed password. This is meant
#       to be used when the peer is in another administrative
#       domain, but it is still needed to identify each user.
#       The star can optionally be followed by some extra
#       information which is added to the username. This can
#       be used to identify this proxy to the peer, similar to
#       the login=username:password option above.
#
#       use 'connect-timeout=nn' to specify a peer
#       specific connect timeout (also see the
#       peer_connect_timeout directive)
#
#       use 'digest-url=url' to tell Squid to fetch the cache
#       digest (if digests are enabled) for this host from
#       the specified URL rather than the Squid default
#       location.
#
#       use 'allow-miss' to disable Squid's use of only-if-cached
#       when forwarding requests to siblings. This is primarily
#       useful when icp_hit_stale is used by the sibling. Too
#       extensive use of this option may result in forwarding
#       loops, and you should avoid having two-way peerings
#       with this option. (for example to deny peer usage on
#       requests from peer by denying cache_peer_access if the
#       source is a peer)
#
#       use 'max-conn=n' to limit the amount of connections Squid
#       may open to this peer.
#
#       use 'htcp' to send HTCP, instead of ICP, queries
#       to the neighbor.  You probably also want to
#       set the "icp port" to 4827 instead of 3130.
#       You must also allow this Squid htcp_access and
#       http_access in the peer Squid configuration.
#
#       use 'htcp-oldsquid' to send HTCP to old Squid versions
#       You must also allow this Squid htcp_access and
#       http_access in the peer Squid configuration.
#
#       'originserver' causes this parent peer to be contacted as
#       an origin server. Meant to be used in accelerator setups.
#
#       use 'userhash' to load-balance amongst a set of parents
#       based on the client proxy_auth or ident username.
#
#       use 'sourcehash' to load-balance amongst a set of parents
#       based on the client source ip.
#
#       use 'name=xxx' if you have multiple peers on the same
#       host but different ports. This name can be used to
#       differentiate the peers in cache_peer_access and similar
#       directives.
#
#       use 'monitorurl=url' to have Squid periodically request a given
#       URL from the peer, and only consider the peer as alive
#       if this monitoring is successful (default none)
#
#       use 'monitorsize=min[-max]' to limit the size range of
#       'monitorurl' replies considered valid. Defaults to 0 to
#       accept any size replies as valid.
#
#       use 'monitorinterval=seconds' to change frequency of
#       how often the peer is monitored with 'monitorurl'
#       (default 300 for a 5 minute interval). If set to 0
#       then monitoring is disabled even if a URL is defined.
#
#       use 'monitortimeout=seconds' to change the timeout of
#       'monitorurl'. Defaults to 'monitorinterval'.
#
#       use 'forceddomain=name' to forcibly set the Host header
#       of requests forwarded to this peer. Useful in accelerator
#       setups where the server (peer) expects a certain domain
#       name and using redirectors to feed this domain name
#       is not feasible.
#
#       use 'ssl' to indicate connections to this peer should
#       be SSL/TLS encrypted.
#
#       use 'sslcert=/path/to/ssl/certificate' to specify a client
#       SSL certificate to use when connecting to this peer.
#
#       use 'sslkey=/path/to/ssl/key' to specify the private SSL
#       key corresponding to sslcert above. If 'sslkey' is not
#       specified 'sslcert' is assumed to reference a
#       combined file containing both the certificate and the key.
#
#       Notes:
#        
#       On Debian/Ubuntu systems a default snakeoil certificate is
#       available in /etc/ssl and users can set:
#        
#         cert=/etc/ssl/certs/ssl-cert-snakeoil.pem
#        
#       and
#        
#         key=/etc/ssl/private/ssl-cert-snakeoil.key
#        
#       for testing.
#
#       use sslversion=1|2|3|4 to specify the SSL version to use
#       when connecting to this peer
#   1 = automatic (default)
#   2 = SSL v2 only
#   3 = SSL v3 only
#   4 = TLS v1 only
#
#       use sslcipher=... to specify the list of valid SSL ciphers
#       to use when connecting to this peer.
#
#       use ssloptions=... to specify various SSL engine options:
#   NO_SSLv2  Disallow the use of SSLv2
#   NO_SSLv3  Disallow the use of SSLv3
#   NO_TLSv1  Disallow the use of TLSv1
#       See src/ssl_support.c or the OpenSSL documentation for
#       a more complete list.
#
#       use sslcafile=... to specify a file containing
#       additional CA certificates to use when verifying the
#       peer certificate.
#
#       use sslcapath=... to specify a directory containing
#       additional CA certificates to use when verifying the
#       peer certificate.
#
#       use sslcrlfile=... to specify a certificate revocation
#       list file to use when verifying the peer certificate.
#
#       use sslflags=... to specify various flags modifying the
#       SSL implementation:
#   DONT_VERIFY_PEER
#    Accept certificates even if they fail to
#    verify.
#   NO_DEFAULT_CA
#    Don't use the default CA list built in
#    to OpenSSL.
#
#       use ssldomain= to specify the peer name as advertised
#       in its certificate. Used for verifying the correctness
#       of the received peer certificate. If not specified the
#       peer hostname will be used.
#
#       use front-end-https to enable the "Front-End-Https: On"
#       header needed when using Squid as a SSL frontend in front
#       of Microsoft OWA. See MS KB document Q307347 for details
#       on this header. If set to auto the header will
#       only be added if the request is forwarded as a https://
#       URL.
#
#       use connection-auth=off to tell Squid that this peer does
#       not support Microsoft connection oriented authentication,
#       and any such challenges received from there should be
#       ignored. Default is auto to automatically determine the
#       status of the peer.
#
#       use idle=n to specify a minimum number of idle connections
#       that should be kept open to this peer.
#
#       use http11 to send requests using HTTP/1.1 to this peer.
#       Note: The HTTP/1.1 support is still incomplete, with an
#       internal HTTP/1.0 hop. As a result, 1xx responses will not
#       be forwarded.
#
#Default:
# none
 
#  TAG: cache_peer_domain
# Use to limit the domains for which a neighbor cache will be
# queried.  Usage:
#
# cache_peer_domain cache-host domain [domain ...]
# cache_peer_domain cache-host !domain
#
# For example, specifying
#
#  cache_peer_domain parent.foo.net .edu
#
# has the effect that UDP query packets are sent to
# 'parent.foo.net' only when the requested object exists on a
# server in the .edu domain.  Prefixing the domain name
# with '!' means the cache will be queried for objects
# NOT in that domain.
#
# NOTE: * Any number of domains may be given for a cache-host,
#    either on the same or separate lines.
#  * When multiple domains are given for a particular
#    cache-host, the first matched domain is applied.
#  * Cache hosts with no domain restrictions are queried
#    for all requests.
#  * There are no defaults.
#  * There is also a 'cache_peer_access' tag in the ACL
#    section.
#
#Default:
# none
 
#  TAG: cache_peer_access
# Similar to 'cache_peer_domain' but provides more flexibility by
# using ACL elements.
#
# cache_peer_access cache-host allow|deny [!]aclname ...
#
# The syntax is identical to 'http_access' and the other lists of
# ACL elements.  See the comments for 'http_access' below, or
# the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html).
#
#Default:
# none
 
#  TAG: neighbor_type_domain
# usage: neighbor_type_domain neighbor parent|sibling domain domain ...
#
# Modifying the neighbor type for specific domains is now
# possible.  You can treat some domains differently than the
# default neighbor type specified on the 'cache_peer' line.
# Normally it should only be necessary to list domains which
# should be treated differently because the default neighbor type
# applies for hostnames which do not match domains listed here.
#
#EXAMPLE:
# cache_peer cache.foo.org parent 3128 3130
# neighbor_type_domain cache.foo.org sibling .com .net
# neighbor_type_domain cache.foo.org sibling .au .de
#
#Default:
# none
 
#  TAG: dead_peer_timeout (seconds)
# This controls how long Squid waits to declare a peer cache
# as "dead."  If there are no ICP replies received in this
# amount of time, Squid will declare the peer dead and not
# expect to receive any further ICP replies.  However, it
# continues to send ICP queries, and will mark the peer as
# alive upon receipt of the first subsequent ICP reply.
#
# This timeout also affects when Squid expects to receive ICP
# replies from peers.  If more than 'dead_peer' seconds have
# passed since the last ICP reply was received, Squid will not
# expect to receive an ICP reply on the next query.  Thus, if
# your time between requests is greater than this timeout, you
# will see a lot of requests sent DIRECT to origin servers
# instead of to your parents.
#
#Default:
# dead_peer_timeout 10 seconds
 
#  TAG: hierarchy_stoplist
# A list of words which, if found in a URL, cause the object to
# be handled directly by this cache.  In other words, use this
# to not query neighbor caches for certain objects.  You may
# list this option multiple times. Note: never_direct overrides
# this option.
# We recommend you use at least the following line.
hierarchy_stoplist cgi-bin ?
 
 
# MEMORY CACHE OPTIONS
# -----------------------------------------------------------------------------
 
#  TAG: cache_mem (bytes)
# NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS SIZE.
# IT ONLY PLACES A LIMIT ON HOW MUCH ADDITIONAL MEMORY SQUID WILL
# USE AS A MEMORY CACHE OF OBJECTS. SQUID USES MEMORY FOR OTHER
# THINGS AS WELL. SEE THE SQUID FAQ SECTION 8 FOR DETAILS.
#
# 'cache_mem' specifies the ideal amount of memory to be used
# for:
#  * In-Transit objects
#  * Hot Objects
#  * Negative-Cached objects
#
# Data for these objects are stored in 4 KB blocks.  This
# parameter specifies the ideal upper limit on the total size of
# 4 KB blocks allocated.  In-Transit objects take the highest
# priority.
#
# In-transit objects have pr


Message edited by maximer1664 on 14-12-2010 at 14:33:25
Reply

Marsh Posté le 14-12-2010 à 14:31:56    

maximer1664 wrote:


PS: at the end of my two internships I have to choose which one I liked best. And I prefer HARDWARE, because you're not shut in an office all the time and you're in contact with customers; even if the work is a bit less varied than networking, I clearly prefer it. Is that normal?


:D Don't worry, you won't get committed for that.
On the other hand, in terms of job prospects and career progression you'll be very, very limited. Keep it as a hobby, tinkering with machines at home.
And if it's for your internship report, highlight the contact with users, not the hardware.


Message edited by akabis on 14-12-2010 at 14:34:33
Reply

Marsh Posté le 14-12-2010 à 16:43:21    

Thanks a lot AKABIS; I'm writing my internship report on the different servers I've built ;)

Reply

Marsh Posté le 14-12-2010 à 20:52:56    

I'm not going to help you with squidguard, but I can share my own experience!
 
I wanted to do the same thing as you, namely squid + squidguard, and I never managed to get squidguard running! (Granted, I didn't dig into it for weeks.) A guy in my class advised me to try dansguardian, which does the same job.
 
Well, I recommend it! It's extremely simple to configure and very effective; I'll let you read up on it.

Reply

Marsh Posté le 15-12-2010 à 11:33:29    

kenny61 wrote:

I'm not going to help you with squidguard, but I can share my own experience!
 
I wanted to do the same thing as you, namely squid + squidguard, and I never managed to get squidguard running! (Granted, I didn't dig into it for weeks.) A guy in my class advised me to try dansguardian, which does the same job.
 
Well, I recommend it! It's extremely simple to configure and very effective; I'll let you read up on it.


 
Hello KENNY61, are you from the Orne, like me?
Yes, I know DANSGUARDIAN too, but the problem is that it isn't multi-site :s (to be confirmed)

Reply

Marsh Posté le 15-12-2010 à 12:24:47    

For a start, you're not calling squidguard from your squid.conf at all.
 
You need a line like this one:
 

Citation :

redirect_program /usr/bin/squidGuard -c /etc/squid/squidguard.conf


 
Your squidguard.conf file must then declare the blacklists to use (you can rely on the blacklist collection from the Université de Toulouse, which is fairly complete and updated often)... e.g.:
 

Citation :


# CONFIG FILE FOR SQUIDGUARD
#
dbhome /var/lib/squidguard/db
logdir /var/log/squidGuard
# ------------------------------------------------------------
# Definition of the filtering databases used
# ------------------------------------------------------------
 
destination bl_adult {
        domainlist      blacklists/adult/domains
        urllist         blacklists/adult/urls
}
 
destination bl_liste_blanche {
        domainlist      blacklists/liste_blanche/domains
        urllist         blacklists/liste_blanche/urls
}
 
destination bl_porn {
        domainlist      blacklists/porn/domains
        urllist         blacklists/porn/urls
}
 
destination bl_radio {
        domainlist      blacklists/radio/domains
        urllist         blacklists/radio/urls
}
 
destination bl_audio-video {
        domainlist      blacklists/audio-video/domains
        urllist         blacklists/audio-video/urls
}
 
 
# ------------------------------------------------------------
# ACL definitions
# ------------------------------------------------------------
 
acl {
        default {
                pass bl_liste_blanche !bl_porn !bl_adult !bl_audio-video !bl_radio all
                redirect http://monserveur/page_a_afficher_ [...] ocage.html
        }
}
# ------------------------------------------------------------


 
In my example, anything in the list named liste_blanche is always allowed through, and anything in porn, audio-video, adult and radio is blocked.
 
Once that's done, rebuild squidGuard's db files with the command:
 

Citation :

squidGuard -C all


 
Then reload your squid.
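To check the filter outside of Squid, you can also feed squidGuard a request on stdin in the redirector format (URL, then client IP/fqdn, ident and method) and see what it prints back; something like this, with a placeholder URL and client IP:

Citation :

# an empty line back means "pass"; a rewritten URL means the request would be redirected
echo "http://www.example.com/ 192.168.1.10/- - GET" | squidGuard -c /etc/squid/squidguard.conf -d

A plain squid -k reconfigure is usually enough to reload the configuration without a full restart.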


Message edited by Neo_t3 on 15-12-2010 at 12:25:38

Reply

Marsh Posté le 15-12-2010 à 14:24:58    

Thanks, I'll give that a try this week, I'll keep you posted.
Yes, it's true that I'm not calling squidGuard; when I do call it, I get an error when I run service squid restart :s

Reply

Marsh Posté le 15-12-2010 à 14:48:28    

maximer1664 wrote:


 
Hello KENNY61, are you from the Orne, like me?
Yes, I know DANSGUARDIAN too, but the problem is that it isn't multi-site :s (to be confirmed)


 
 
Multi-site?

Reply

Marsh Posté le 15-12-2010 à 15:44:59    

ShonGail wrote:


 
 
Multi-site?


 
accessible from several distinct sites

Reply


Marsh Posté le 17-12-2010 à 16:09:28    

maximer1664 wrote:


 
accessible from several distinct sites


 
You mean usable by clients on IP ranges different from that of the interface it listens on?

Reply

Marsh Posté le 17-12-2010 à 16:43:51    

ShonGail wrote:


 
You mean usable by clients on IP ranges different from that of the interface it listens on?


 
 
Yes, one that serves several centres linked together by VPN.

Reply

Marsh Posté le 17-12-2010 à 19:07:50    

Yep! La Ferté! And you? Where are you studying?
 
 
Sorry, I've drifted way off topic there :)

Reply

Marsh Posté le 29-12-2010 à 21:48:13    

Generally speaking, what arguments would you put forward to "sell" to your management, or to whoever holds the budget, that deploying a proxy is very useful?
 
For context, the "business" users are afraid of being spied on.
 
Arguments to convince business managers:
- caching of the most-visited pages, hence bandwidth savings and better performance
- blocking of unwanted ads
- improved workstation security, since the direct connection between the Internet and the PCs is broken; every request goes through the proxy.
 
Arguments to convince IT managers:
- we sometimes see drops in our Internet throughput, and we suspect part of the bandwidth is occasionally misused; the proxy would let us monitor this through its logs
- it blocks access to illegal sites and illegal uses of the Internet (peer-to-peer) if well-chosen filtering rules are put in place (see the sketch just below this list)
- SQUID is free and open source
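To give an idea of what "well-chosen rules" can look like, here is the kind of classic Squid ACL I have in mind (just a sketch, not our actual config); it stops clients from tunnelling arbitrary ports through the proxy with CONNECT:

Citation :

# only allow CONNECT tunnels towards the standard SSL port
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports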
 
Do you see any other arguments?

Reply

Marsh Posté le 02-01-2011 à 11:32:03    

fookooflakman wrote:

Generally speaking, what arguments would you put forward to "sell" to your management, or to whoever holds the budget, that deploying a proxy is very useful?
 
For context, the "business" users are afraid of being spied on.
 
Arguments to convince business managers:
- caching of the most-visited pages, hence bandwidth savings and better performance
- blocking of unwanted ads Adblock in Firefox is far more effective...
- improved workstation security, since the direct connection between the Internet and the PCs is broken; every request goes through the proxy.
Meh, personally I'd trust a firewall more...
Arguments to convince IT managers:
- we sometimes see drops in our Internet throughput, and we suspect part of the bandwidth is occasionally misused; the proxy would let us monitor this through its logs Install Itop on the gateway instead, you'll see right away who is downloading like mad :D
- it blocks access to illegal sites and illegal uses of the Internet (peer-to-peer) if well-chosen filtering rules are put in place Squidguard will not stop people from using eMule (for example)!!!
- SQUID is free and open source
 
Do you see any other arguments?


Message edited by grao on 02-01-2011 at 11:33:19

Reply

Marsh Posté le 02-01-2011 à 11:49:40    

Thanks for the reply.
 

Citation :

- Blocking unwanted ads with Adblock in Firefox is very probably far more effective


But for compatibility reasons with some old web apps, the vast majority of our users are on IE 6; we're currently looking at a migration to IE 8 (apparently it was IE 7 that caused problems at the time), but still... In any case, those who have both IE and Firefox (we included Firefox in the new master images) tend to stick with IE, because opening two browsers annoys them.
 
Which do you think is cheaper: managing the change and the communication (knowing that almost all our users are "basic" users), or installing a proxy?
(That's a genuine question, I don't have a strong opinion.)
 

Citation :

- A firewall is more secure


Granted, but we already have a firewall.
 

Citation :

- Install Itop on the gateway to monitor what share of the bandwidth the users consume?


OK, thanks for that, I'll take a look.
 
Anything else? I think the arguments are roughly valid; the idea is either:
- to find more striking ones
- to rephrase them so they are more convincing
- or, above all, to find new ones!


Message edited by fookooflakman on 02-01-2011 at 11:56:17
Reply

Marsh Posté le 02-01-2011 à 11:54:55    

If your connection is saturated even with a firewall in place (and apparently you don't know by what), then it isn't filtering the right things...

 

I'm not going to give an ITIL course here, but managing the change and communicating about it are pretty much the same thing. Let's say that if you manage the change properly (in the ITIL sense), you have to include communicating that change to the users.
Communication-wise it'll be quick: as with the proxy, technically nothing changes for them; you just have to tell them that if some sites no longer load, that's normal :D
Cost-wise, what does installing Squid take? Half a person-day?
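On Debian/Ubuntu the software side really is a one-liner (exact package names can vary a bit between releases):

Citation :

# pulls in the proxy and the URL filter
apt-get install squid squidguard

The real time goes into the ACLs, the blacklists and the testing, not the install itself.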

Message edited by grao on 02-01-2011 at 11:55:36

Reply

Marsh Posté le 02-01-2011 à 12:16:40    

grao wrote:

I'm not going to give an ITIL course here, but managing the change and communicating about it are pretty much the same thing. Let's say that if you manage the change properly (in the ITIL sense), you have to include communicating that change to the users.
Communication-wise it'll be quick: as with the proxy, technically nothing changes for them; you just have to tell them that if some sites no longer load, that's normal :D
Cost-wise, what does installing Squid take? Half a person-day?


We misunderstood each other (but your point about ITIL is correct). I'm not talking about possible communication around deploying a proxy, but rather about the support work (communication, training) needed to explain that another browser exists, that it's called Mozilla, that you can browse with several tabs, etc. That's in the scenario where we'd deploy Mozilla on all the workstations (since you suggest Adblock).
From the "proxy install" point of view, it would be transparent for them, because at first I don't think we'd filter anything URL-wise: too many VIP users are used to extreme flexibility, and we can't afford a head-on clash with that kind of user.
 
That said, I take the point that the "proxy" solution looks cheaper. :jap: And that's why I'm now trying to "sell" this solution to the managers, so we're back to square one.
 

grao wrote:

If your connection is saturated even with a firewall in place (and apparently you don't know by what), then it isn't filtering the right things...


Regarding your remark about the firewall, that's really not my area of expertise, but I'm still not convinced. From a technical point of view, which port restrictions do you have in mind?

Message edited by fookooflakman on 02-01-2011 at 12:18:24
Reply

Marsh Posté le 02-01-2011 à 13:39:14    

In general, putting a proxy in place instead of direct Internet access means:
- on the firewall, you block client access to the outside on the ports you want the proxy to handle, so clients no longer have any direct access
- you allow those ports out only from the proxy server itself
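Concretely, on an iptables gateway that could look something like this (the proxy address 192.168.1.5 is just a placeholder):

Citation :

# the proxy box is the only one allowed to reach the web directly
iptables -A FORWARD -s 192.168.1.5 -p tcp -m multiport --dports 80,443 -j ACCEPT
# everyone else is blocked on those ports and has to go through the proxy
iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j REJECT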
 
So even if the proxy is transparent for the users, there can be unforeseen side effects on applications that want to go out but aren't configured for the proxy, and other little joys of that kind.

Reply

Marsh Posté le 02-01-2011 à 13:53:19    

Granted; we're assuming the proxy will be set up correctly, that we've analysed the impact, and that we know which ports to open.
 
And otherwise, in terms of arguments to put forward to "sell" the idea of a proxy on our infrastructure, any ideas?

Reply

Marsh Posté le 02-01-2011 à 13:56:11    

fookooflakman wrote:


We misunderstood each other (but your point about ITIL is correct). I'm not talking about possible communication around deploying a proxy, but rather about the support work (communication, training) needed to explain that another browser exists, that it's called Mozilla, that you can browse with several tabs, etc. That's in the scenario where we'd deploy Mozilla on all the workstations (since you suggest Adblock).
From the "proxy install" point of view, it would be transparent for them, because at first I don't think we'd filter anything URL-wise: too many VIP users are used to extreme flexibility, and we can't afford a head-on clash with that kind of user.
 
That said, I take the point that the "proxy" solution looks cheaper. :jap: And that's why I'm now trying to "sell" this solution to the managers, so we're back to square one.
 


 

fookooflakman wrote:


Regarding your remark about the firewall, that's really not my area of expertise, but I'm still not convinced. From a technical point of view, which port restrictions do you have in mind?


For the VIPs you can explicitly declare, in Squidguard, source IPs that must not be filtered.
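Something like this in squidguard.conf, for example (the address range and the block page URL are placeholders):

Citation :

# machines that must never be filtered
src vip {
        ip 192.168.10.0/24
}

acl {
        # VIPs bypass every blacklist
        vip {
                pass all
        }
        default {
                pass !bl_porn !bl_adult all
                redirect http://monserveur/page_blocage.html
        }
}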
Otherwise, blocking all the "common" ports of the bittorrent applications is a good start. Also monitor the number of connections per machine: if an ordinary user has more than 200 simultaneous connections open, it's rarely for Wikipedia :D


Reply

Marsh Posté le 02-01-2011 à 14:00:44    

Well, you've pretty much said it all:
- caching
- protocol break
- auditing of Internet usage
- control over what users send out to the net

Reply

Marsh Posté le 02-01-2011 à 14:00:53    

grao wrote:


For the VIPs you can explicitly declare, in Squidguard, source IPs that must not be filtered.
Otherwise, blocking all the "common" ports of the bittorrent applications is a good start. Also monitor the number of connections per machine: if an ordinary user has more than 200 simultaneous connections open, it's rarely for Wikipedia :D


 
Our VIPs don't have fixed IP addresses and don't have a reserved subnet range. Given the company's standing, I also doubt anyone is playing around with that, but clearly putting a proxy in place is a way of forcing ourselves to keep an eye on this.
 
@ Je@nb: OK, I'll try to put together a small study on the subject based on these arguments, thanks.

Message edited by fookooflakman on 02-01-2011 at 14:05:25
Reply

Marsh Posté le 02-01-2011 à 14:57:41    

fookooflakman wrote:

 
Our VIPs don't have fixed IP addresses and don't have a reserved subnet range[1]. Given the company's standing, I also doubt anyone is playing around with that[2], but clearly putting a proxy in place is a way of forcing ourselves to keep an eye on this.
 
@ Je@nb: OK, I'll try to put together a small study on the subject based on these arguments, thanks.


1) MAC addresses?
2) Standing has nothing to do with it :D I've already seen it in companies you wouldn't suspect in the least :whistle:

Message edited by grao on 02-01-2011 at 14:59:00

Reply

Marsh Posté le 02-01-2011 à 15:25:48    

grao wrote:


1) MAC addresses?
2) Standing has nothing to do with it :D I've already seen it in companies you wouldn't suspect in the least :whistle:


 
1) Possibly, but that will have to be looked at at deployment time, if it's approved.
2) Actually it does matter a bit; besides, they'd need admin rights on their workstations to install what's needed, which most of them don't have, and given the kind of users we have, I assure you it's hard to imagine (big noobs, which is normal given the average age). But let's not start a debate on that, there's nothing to gain from it.

Reply

Marsh Posté le 02-01-2011 à 15:40:01    

It's not just P2P: direct download (DDL) can hurt bandwidth a lot, and it needs nothing more than a browser.
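If the goal is just to stop one user from eating the whole line, Squid's delay pools can also cap per-client HTTP throughput; roughly something like this (the 256 KB/s figure is only an example):

Citation :

# one pool, class 2 = one aggregate bucket plus one bucket per client IP
delay_pools 1
delay_class 1 2
# aggregate unlimited, each client capped at ~256 KB/s
delay_parameters 1 -1/-1 262144/262144
delay_access 1 allow all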


Reply
