Table of Contents
- Starting SpiderFoot
- API Keys
- Using SpiderFoot
SpiderFoot is an open source intelligence automation tool. Its goal is to automate the process of gathering intelligence about a given target, which may be an IP address, domain name, hostname or network subnet.
SpiderFoot can be used offensively, i.e. as part of a black-box penetration test to gather information about the target, or defensively, to identify what information your organisation is freely providing for attackers to use against you.
SpiderFoot is written in Python (2.7), so to run on Linux/Solaris/FreeBSD/etc. you need Python 2.7 installed, in addition to the lxml, netaddr, M2Crypto, CherryPy, bs4, requests and Mako modules.
To install the dependencies using PIP, run the following:
~$ pip install lxml netaddr M2Crypto cherrypy mako requests bs4
On some distros, M2Crypto cannot be installed with PIP and must be installed using APT instead:
~$ apt-get install python-m2crypto
Other modules such as PyPDF2, SOCKS and more are included in the SpiderFoot package, so you don’t need to install them separately.
SpiderFoot for Windows is a compiled executable file, and so all dependencies are packaged with it.
No third party tools/libraries need to be installed, not even Python.
Installing SpiderFoot is literally as simple as unpacking the distribution tar.gz/zip file.
To install SpiderFoot on Linux/Solaris/FreeBSD/etc. you only need to un-targz the package, as follows:
~$ tar zxvf spiderfoot-X.X.X-src.tar.gz
~$ cd spiderfoot-X.X.X
~/spiderfoot-X.X.X$
On Windows, unzip the distribution ZIP file to a folder of your choice - that's it.
To run SpiderFoot, simply execute
sf.py from the directory you extracted SpiderFoot into:
~/spiderfoot-X.X.X$ python ./sf.py
Once executed, a web server will be started, which by default listens on 127.0.0.1:5001. You can then use the web browser of your choice by browsing to http://127.0.0.1:5001.
If you wish to make SpiderFoot accessible from another system, for example running it on a server and controlling it remotely, then you can specify an external IP for SpiderFoot to bind to, or use 0.0.0.0 so that it binds to all addresses, including 127.0.0.1:
~/spiderfoot-X.X.X$ python ./sf.py 0.0.0.0:5001
If port 5001 is used by another application on your system, you can change the port:
~/spiderfoot-X.X.X$ python ./sf.py 127.0.0.1:9999
SpiderFoot for Windows comes as a pre-packaged executable, with no need to install any dependencies.
For now, there is no installer wizard, so all that's needed is to unzip the package into a directory (e.g. C:\SpiderFoot) and run the executable.
As with Linux, you can also specify the IP and port to bind to:
By default, SpiderFoot neither authenticates users connecting to its user interface nor serves over HTTPS, so avoid running it on a server/workstation that can be accessed from untrusted devices, as they would be able to control SpiderFoot remotely and initiate scans from your system. As of SpiderFoot 2.7, authentication and HTTPS are supported; see the Security section below.
With version 2.7, SpiderFoot introduced authentication as well as TLS/SSL support. These are automatic based on the presence of specific files.
SpiderFoot will require basic digest authentication if a file named
passwd exists in the SpiderFoot root directory. The format of the file is simple - just create one entry per account.
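For illustration, here is a minimal passwd file for one hypothetical account (the username admin and password s3cr3t are placeholders, and the username:password layout is assumed from the description above):

```shell
# Create a passwd file in the SpiderFoot root directory with one
# account per line, in username:password form (hypothetical account).
echo 'admin:s3cr3t' > passwd
```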
Once the file is created, restart SpiderFoot.
SpiderFoot will serve HTTPS (and only that) if it detects the existence of a public certificate and key file in SpiderFoot's root directory. This means TLS/SSL will be used on whatever port you set SpiderFoot to listen on. It is not possible for SpiderFoot to serve both HTTP and HTTPS simultaneously on different ports; if you need that, an nginx proxy in front of SpiderFoot would be a better solution.
Simply place two files in the SpiderFoot directory -
spiderfoot.crt (RSA public key in PEM format) and
spiderfoot.key (RSA private key in PEM format). Restart SpiderFoot and you will now be serving HTTPS only.
For instructions on generating a self-signed certificate, check out this StackOverflow article.
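As a quick sketch, a self-signed key/certificate pair with the filenames above can be generated with openssl (the CN=localhost subject is a placeholder; use your server's hostname):

```shell
# Generate a self-signed RSA key and certificate, valid for one year,
# named as SpiderFoot expects. CN=localhost is a placeholder subject.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout spiderfoot.key -out spiderfoot.crt -subj "/CN=localhost"
```

Place both files in the SpiderFoot root directory and restart SpiderFoot.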
A few SpiderFoot modules require API keys, or perform better when API keys are supplied.
Project Honeypot
- Go to http://www.projecthoneypot.org
- Sign up (free) and log in
- Click Services -> HTTP Blacklist
- An API key should be listed
- Copy and paste that key into the Settings -> Honeypot Checker section in SpiderFoot
SHODAN
- Go to http://www.shodanhq.com
- Sign up (free) and log in
- Click ‘Developer Center’
- On the far right your API key should appear in a box
- Copy and paste that key into the Settings -> SHODAN section in SpiderFoot
VirusTotal
- Go to http://www.virustotal.com
- Sign up (free) and log in
- Click your username in the far right and select ‘My API Key’
- Copy and paste the key in the grey box into the Settings -> VirusTotal section in SpiderFoot
IBM X-Force Exchange
- Go to https://exchange.xforce.ibmcloud.com/new
- Create an IBM ID (free) and log in
- Go to your account settings
- Click API Access
- Generate the API key and password (you need both)
- Copy and paste the key and password into the Settings -> X-Force section in SpiderFoot
MalwarePatrol
- Go to http://www.malwarepatrol.net
- Create an account (free) and log in
- Click “Open Source” and scroll down to the bottom
- Click the “Free” link in the subscription pricing table
- Click the free block lists link
- You will receive a receipt ID
- Copy and paste the receipt ID into the Settings -> MalwarePatrol section in SpiderFoot
BotScout
- Go to http://www.botscout.com
- Create an account (free) and log in
- Under Account Info, your API key will be there
- Copy and paste the API key into the Settings -> BotScout section in SpiderFoot
Cymon
- Go to http://www.cymon.io
- Create an account (free) and log in
- Under “My API Dashboard”, your API key will be there
- Copy and paste the API key into the Settings -> Cymon section in SpiderFoot
Censys
- Go to http://www.censys.io
- Create an account (free) and log in
- Click “My Account” (bottom right)
- Copy and paste the API Credentials values into the Settings -> Censys section in SpiderFoot
Hunter.io
- Go to http://www.hunter.io
- Create an account (free) and log in
- Click “API” in the top menu bar
- Copy and paste the API key into the Settings -> Hunter.io section in SpiderFoot
Running a Scan
When you run SpiderFoot for the first time, there is no historical data, so you should be presented with a screen like the following:
To initiate a scan, click on the ‘New Scan’ button in the top menu bar. You will then need to define a name for your scan (these are non-unique) and a target (also non-unique):
You can then define how you would like to run the scan - either by use case (the tab selected by default), by data required or by module.
Module-based scanning is for more advanced users who are familiar with the behavior and data provided by different modules, and want more control over the scan:
Beware, though: there is no dependency checking when scanning by module, only when scanning by required data. This means that if you select a module that depends on event types provided only by other modules, and those modules are not selected, you will get no results.
From the moment you click ‘Run Scan’, you will be taken to a screen for monitoring your scan in near real time:
That screen is made up of a graph showing a breakdown of the data obtained so far, plus log messages generated by SpiderFoot and its modules.
The bars of the graph are clickable, taking you to the result table for that particular data type.
By clicking on the ‘Browse’ button for a scan, you can browse the data by type:
This data is exportable and searchable. Click the Search box to get a pop-up explaining how to perform searches.
By clicking on one of the data types, you will be presented with the actual data:
The fields displayed are explained as follows:
- Checkbox field: Use this to set/unset fields as false positive. Once at least one is checked, click the orange False Positive button above to set/unset the record.
- Data Element: The data the module was able to obtain about your target.
- Source Data Element: The data the module received as the basis for its data collection. In the example above, the sfp_portscan_tcp module received an event about an open port, and used that to obtain the banner on that port.
- Source Module: The module that identified this data.
- Identified: When the data was identified by the module.
You can click the black icons to modify how this data is represented. For instance you can get a unique data representation by clicking the Unique Data View icon:
Setting False Positives
Version 2.6.0 introduced the ability to set data records as false positive. As indicated in the previous section, use the checkbox and the orange button to set/unset records as false positive:
Once you have set records as false positive, you will see an indicator next to those records, and have the ability to filter them from view, as shown below:
NOTE: Records can only be set to false positive once a scan has finished running. This is because setting a record to false positive also results in all child data elements being set to false positive. This obviously cannot be done if the scan is still running and can thus lead to an inconsistent state in the back-end. The UI will prevent you from doing so.
The result of a record being set to false positive, aside from the indicator in the data table view and exports, is that such data will not be shown in the node graphs.
Results can be searched either at the whole scan level, or within individual data types. The scope of the search is determined by the screen you are on at the time.
As indicated by the pop-up box when selecting the search field, you can search as follows:
- Exact value: Non-wildcard searching for a specific value. For example, search for 404 within the HTTP Status Code section to see all pages that were not found.
- Pattern matching: Search for simple wildcards to find patterns. For example, search for *:22 within the Open TCP Port section to see all instances of port 22 open.
- Regular expression searches: Encapsulate your string in ‘/’ to search by regular expression. For example, search for ‘/\d+.\d+.\d+.\d+/’ to find anything looking like an IP address in your scan results.
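The regular-expression search above can be tried outside SpiderFoot with Python's re module. Note that the unescaped dots in the document's /\d+.\d+.\d+.\d+/ example match any character; the sketch below escapes them for a stricter IP-like match:

```python
import re

# IP-address-like pattern with escaped dots (stricter than \d+.\d+.\d+.\d+,
# where an unescaped "." matches any character).
pattern = re.compile(r"\d+\.\d+\.\d+\.\d+")

text = "Server at 192.168.0.10 responded; backup at 10.0.0.2."
matches = pattern.findall(text)
print(matches)  # ['192.168.0.10', '10.0.0.2']
```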
When you have some historical scan data accumulated, you can use the list available on the ‘Scans’ section to manage them:
You can filter the scans shown by altering the Filter drop-down selection. Except for the green refresh icon, the icons on the right apply to whichever scans you have checked the checkboxes for.
Refer to this post for more information.
SpiderFoot has all data collection modularised. When a module discovers a piece of data, that data is transmitted to all other modules that are ‘interested’ in that data type for processing. Those modules will then act on that piece of data to identify new data, and in turn generate new events for other modules which may be interested, and so on.
For example, sfp_dns may identify an IP address associated with your target, notifying all interested modules. One of those interested modules would be the
sfp_ir module, which will take that IP address and identify the netblock it is a part of, the BGP ASN and so on.
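This publish/subscribe flow can be sketched in plain Python (a toy dispatcher for illustration only, not SpiderFoot's actual classes):

```python
# Toy version of SpiderFoot's event model: each module declares which
# event types it watches, and a dispatcher delivers each new event to
# every interested module.
class ToyModule:
    def __init__(self, name, watched):
        self.name = name
        self.watched = watched      # event types this module consumes
        self.received = []

    def handle(self, event_type, data):
        # "*" means be notified about all events
        if event_type in self.watched or "*" in self.watched:
            self.received.append((event_type, data))

def notify_all(modules, event_type, data):
    """Deliver an event to every module interested in its type."""
    for m in modules:
        m.handle(event_type, data)

dns = ToyModule("sfp_dns", ["INTERNET_NAME"])
ir = ToyModule("sfp_ir", ["IP_ADDRESS"])

# An IP address event reaches sfp_ir but not sfp_dns:
notify_all([dns, ir], "IP_ADDRESS", "93.184.216.34")
print(ir.received)  # [('IP_ADDRESS', '93.184.216.34')]
```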
This might be best illustrated by looking at module code. For example, the
sfp_names module looks for TARGET_WEB_CONTENT and EMAILADDR events for identifying human names:
# What events is this module interested in for input
# * = be notified about all events.
def watchedEvents(self):
    return ["TARGET_WEB_CONTENT", "EMAILADDR"]

# What events this module produces
# This is to support the end user in selecting modules based on events
# produced.
def producedEvents(self):
    return ["HUMAN_NAME"]
Meanwhile, as each event is generated to a module, it is also recorded in the SpiderFoot database for reporting and viewing in the UI.
The below table is an up-to-date list of all SpiderFoot modules and a short summary of their capabilities.
|sfp_accounts.py||Accounts||Look for possible associated accounts on nearly 200 websites like Ebay, Slashdot, reddit, etc.|
|sfp_adblock.py||AdBlock Check||Check if linked pages would be blocked by AdBlock Plus.|
|sfp_base64.py||Base64||Identify Base64-encoded strings in any content and URLs, often revealing interesting hidden information.|
|sfp_bingsearch.py||Bing||Some light Bing scraping to identify sub-domains and links.|
|sfp_binstring.py||Binary String Extractor||Attempt to identify strings in binary content.|
|sfp_blacklist.py||Blacklist||Query various blacklist databases for open relays, open proxies, vulnerable servers, etc.|
|sfp_botscout.py||BotScout||Searches botscout.com’s database of spam-bot IPs and e-mail addresses.|
|sfp_censys.py||Censys||Obtain information from Censys.io|
|sfp_coderepo.py||Code Repos||Identify associated public code repositories (Github only for now).|
|sfp_cookie.py||Cookies||Extract Cookies from HTTP headers.|
|sfp_crossref.py||Cross-Reference||Identify whether other domains are associated (‘Affiliates’) of the target.|
|sfp_cymon.py||Cymon||Obtain information from Cymon.io|
|sfp_darksearch.py||Darknet||Search Tor ‘Onion City’ search engine for mentions of the target domain.|
|sfp_defaced.py||Defacement Check||Check if a hostname/domain appears on the zone-h.org ‘special defacements’ RSS feed.|
|sfp_dns.py||DNS||Performs a number of DNS checks to obtain Sub-domains/Hostnames, IP Addresses and Affiliates.|
|sfp_duckduckgo.py||DuckDuckGo||Query DuckDuckGo’s API for descriptive information about your target.|
|sfp_email.py||E-Mail||Identify e-mail addresses in any obtained data.|
|sfp_errors.py||Errors||Identify common error messages in content like SQL errors, etc.|
|sfp_filemeta.py||File Metadata||Extracts meta data from documents and images.|
|sfp_geoip.py||GeoIP||Identifies the physical location of IP addresses identified.|
|sfp_googlesearch.py||Google||Some light Google scraping to identify sub-domains and links.|
|sfp_historic.py||Historic Files||Identifies historic versions of interesting files/pages from the Wayback Machine.|
|sfp_honeypot.py||Honeypot Checker||Query the projecthoneypot.org database for entries.|
|sfp_hosting.py||Hosting Providers||Find out if any IP addresses identified fall within known 3rd party hosting ranges, e.g. Amazon.|
|sfp_hunter.py||Hunter.io||Check for e-mail addresses and names on hunter.io.|
|sfp_intfiles.py||Interesting Files||Identifies potential files of interest, e.g. office documents, zip files.|
|sfp_ir.py||Internet Registries||Queries Internet Registries to identify netblocks and other info.|
|sfp_junkfiles.py||Junk Files||Looks for old/temporary and other similar files.|
|sfp_malcheck.py||Malicious Check||Check if a website, IP or ASN is considered malicious by various sources. Includes TOR exit nodes.|
|sfp_malwarepatrol.py||MalwarePatrol||Searches malwarepatrol.net’s database of malicious URLs/IPs.|
|sfp_names.py||Name Extractor||Attempt to identify human names in fetched content.|
|sfp_pageinfo.py||Page Info||Obtain information about web pages (do they take passwords, do they contain forms, etc.).|
|sfp_pastes.py||Pastes||PasteBin, Pastie and Notepad.cc scraping (via Google) to identify related content.|
|sfp_pgp.py||PGP Key Look-up||Look up e-mail addresses in PGP public key servers.|
|sfp_phone.py||Phone Numbers||Identify phone numbers in scraped webpages.|
|sfp_portscan_tcp.py||Port Scanner - TCP||Scans for commonly open TCP ports on Internet-facing systems.|
|sfp_psbdmp.py||Psbdmp.com||Check psbdmp.com (PasteBin Dump) for potentially hacked e-mails and domains.|
|sfp_pwned.py||Pwned Password||Check Have I Been Pwned? for hacked e-mail addresses identified.|
|sfp_s3bucket.py||S3 Bucket Finder||Search for potential S3 buckets associated with the target.|
|sfp_sharedip.py||Shared IP||Search Bing and/or Robtex.com and/or HackerTarget.com for hosts sharing the same IP.|
|sfp_shodan.py||SHODAN||Obtain information from SHODAN about identified IP addresses.|
|sfp_similar.py||Similar Domains||Search various sources to identify similar looking domain names, for instance squatted domains.|
|sfp_social.py||Social Networks||Identify presence on social media networks such as LinkedIn, Twitter and others.|
|sfp_socialprofiles.py||Social Media Profiles||Identify the social media profiles for human names identified.|
|sfp_spider.py||Spider||Spidering of web-pages to extract content for searching.|
|sfp_sslcert.py||SSL||Gather information about SSL certificates used by the target’s HTTPS sites.|
|sfp_strangeheaders.py||Strange Headers||Obtain non-standard HTTP headers returned by web servers.|
|sfp_threatcrowd.py||ThreatCrowd||Obtain information from ThreatCrowd about identified IP addresses, domains and e-mail addresses.|
|sfp_tldsearch.py||TLD Search||Search all Internet TLDs for domains with the same name as the target (this can be very slow.)|
|sfp_virustotal.py||VirusTotal||Obtain information from VirusTotal about identified IP addresses.|
|sfp_vuln.py||Vulnerable||Check external vulnerability scanning/reporting services (for now only openbugbounty.org) to see if the target is listed.|
|sfp_webframework.py||Web Framework||Identify the usage of popular web frameworks like jQuery, YUI and others.|
|sfp_websvr.py||Web Server||Obtain web server banners to identify versions of web servers being used.|
|sfp_whois.py||Whois||Perform a WHOIS look-up on domain names and owned netblocks.|
|sfp_wikileaks.py||Wikileaks||Search Wikileaks for mentions of domain names and e-mail addresses.|
|sfp_xforce.py||XForce Exchange||Obtain information from IBM X-Force Exchange|
|sfp_yahoosearch.py||Yahoo||Some light Yahoo scraping to identify sub-domains and links.|
As mentioned above, SpiderFoot works on an “event-driven” model, whereby each module generates events about data elements which other modules listen to and consume.
The data elements are one of the following types:
- entities like IP addresses, Internet names (hostnames, sub-domains, domains),
- sub-entities like port numbers, URLs and software installed,
- descriptors of those entities (malicious, physical location information, …) or
- data, which is mostly unstructured (web page content, port banners, raw DNS records, …)
Here are all the available data elements built into SpiderFoot:
|Element ID||Element Name||Element Data Type|
|ACCOUNT_EXTERNAL_OWNED||Account on External Site||ENTITY|
|ACCOUNT_EXTERNAL_OWNED_COMPROMISED||Hacked Account on External Site||DESCRIPTOR|
|ACCOUNT_EXTERNAL_USER_SHARED||User Account on External Site||ENTITY|
|ACCOUNT_EXTERNAL_USER_SHARED_COMPROMISED||Hacked User Account on External Site||DESCRIPTOR|
|AFFILIATE_INTERNET_NAME||Affiliate - Internet Name||ENTITY|
|AFFILIATE_IPADDR||Affiliate - IP Address||ENTITY|
|AFFILIATE_WEB_CONTENT||Affiliate - Web Content||DATA|
|AFFILIATE_DESCRIPTION_CATEGORY||Affiliate Description - Category||DESCRIPTOR|
|AFFILIATE_DESCRIPTION_ABSTRACT||Affiliate Description - Abstract||DESCRIPTOR|
|APPSTORE_ENTRY||App Store Entry||ENTITY|
|AMAZON_S3_BUCKET||Amazon S3 Bucket||ENTITY|
|BGP_AS_OWNER||BGP AS Ownership||ENTITY|
|BGP_AS_MEMBER||BGP AS Membership||ENTITY|
|BGP_AS_PEER||BGP AS Peer||ENTITY|
|BLACKLISTED_IPADDR||Blacklisted IP Address||DESCRIPTOR|
|BLACKLISTED_AFFILIATE_IPADDR||Blacklisted Affiliate IP Address||DESCRIPTOR|
|BLACKLISTED_SUBNET||Blacklisted IP on Same Subnet||DESCRIPTOR|
|BLACKLISTED_NETBLOCK||Blacklisted IP on Owned Netblock||DESCRIPTOR|
|DARKNET_MENTION_URL||Darknet Mention URL||DESCRIPTOR|
|DARKNET_MENTION_CONTENT||Darknet Mention Web Content||DATA|
|DEFACED_IPADDR||Defaced IP Address||DESCRIPTOR|
|DEFACED_COHOST||Defaced Co-Hosted Site||DESCRIPTOR|
|DEFACED_AFFILIATE_IPADDR||Defaced Affiliate IP Address||DESCRIPTOR|
|DESCRIPTION_CATEGORY||Description - Category||DESCRIPTOR|
|DESCRIPTION_ABSTRACT||Description - Abstract||DESCRIPTOR|
|DNS_TEXT||DNS TXT Record||DATA|
|DOMAIN_NAME_PARENT||Domain Name (Parent)||ENTITY|
|EMAILADDR_COMPROMISED||Hacked Email Address||DESCRIPTOR|
|HTTP_CODE||HTTP Status Code||DATA|
|INTERESTING_FILE_HISTORIC||Historic Interesting File||DESCRIPTOR|
|LINKED_URL_INTERNAL||Linked URL - Internal||SUBENTITY|
|LINKED_URL_EXTERNAL||Linked URL - External||SUBENTITY|
|MALICIOUS_IPADDR||Malicious IP Address||DESCRIPTOR|
|MALICIOUS_COHOST||Malicious Co-Hosted Site||DESCRIPTOR|
|MALICIOUS_EMAILADDR||Malicious E-mail Address||DESCRIPTOR|
|MALICIOUS_INTERNET_NAME||Malicious Internet Name||DESCRIPTOR|
|MALICIOUS_AFFILIATE_IPADDR||Malicious Affiliate IP Address||DESCRIPTOR|
|MALICIOUS_NETBLOCK||Malicious IP on Owned Netblock||DESCRIPTOR|
|MALICIOUS_SUBNET||Malicious IP on Same Subnet||DESCRIPTOR|
|LEAKSITE_URL||Leak Site URL||ENTITY|
|LEAKSITE_CONTENT||Leak Site Content||DATA|
|PGP_KEY||PGP Public Key||DATA|
|PROVIDER_DNS||Name Server (DNS NS Records)||ENTITY|
|PROVIDER_MAIL||Email Gateway (DNS MX Records)||ENTITY|
|PUBLIC_CODE_REPO||Public Code Repository||ENTITY|
|RAW_RIR_DATA||Raw Data from RIRs||DATA|
|RAW_DNS_RECORDS||Raw DNS Records||DATA|
|RAW_FILE_META_DATA||Raw File Meta Data||DATA|
|SEARCH_ENGINE_WEB_CONTENT||Search Engines Web Content||DATA|
|SOCIAL_MEDIA||Social Media Presence||ENTITY|
|SSL_CERTIFICATE_RAW||SSL Certificate - Raw Data||DATA|
|SSL_CERTIFICATE_ISSUED||SSL Certificate - Issued to||ENTITY|
|SSL_CERTIFICATE_ISSUER||SSL Certificate - Issued by||ENTITY|
|SSL_CERTIFICATE_MISMATCH||SSL Certificate Host Mismatch||DESCRIPTOR|
|SSL_CERTIFICATE_EXPIRED||SSL Certificate Expired||DESCRIPTOR|
|SSL_CERTIFICATE_EXPIRING||SSL Certificate Expiring||DESCRIPTOR|
|TCP_PORT_OPEN||Open TCP Port||SUBENTITY|
|TCP_PORT_OPEN_BANNER||Open TCP Port Banner||DATA|
|UDP_PORT_OPEN||Open UDP Port||SUBENTITY|
|UDP_PORT_OPEN_INFO||Open UDP Port Information||DATA|
|URL_ADBLOCKED_EXTERNAL||URL (AdBlocked External)||DESCRIPTOR|
|URL_ADBLOCKED_INTERNAL||URL (AdBlocked Internal)||DESCRIPTOR|
|URL_FLASH||URL (Uses Flash)||DESCRIPTOR|
|URL_WEB_FRAMEWORK||URL (Uses a Web Framework)||DESCRIPTOR|
|URL_JAVA_APPLET||URL (Uses Java Applet)||DESCRIPTOR|
|URL_STATIC||URL (Purely Static)||DESCRIPTOR|
|URL_PASSWORD||URL (Accepts Passwords)||DESCRIPTOR|
|URL_UPLOAD||URL (Accepts Uploads)||DESCRIPTOR|
|URL_FORM_HISTORIC||Historic URL (Form)||DESCRIPTOR|
|URL_FLASH_HISTORIC||Historic URL (Uses Flash)||DESCRIPTOR|
|URL_WEB_FRAMEWORK_HISTORIC||Historic URL (Uses a Web Framework)||DESCRIPTOR|
|URL_JAVA_APPLET_HISTORIC||Historic URL (Uses Java Applet)||DESCRIPTOR|
|URL_STATIC_HISTORIC||Historic URL (Purely Static)||DESCRIPTOR|
|URL_PASSWORD_HISTORIC||Historic URL (Accepts Passwords)||DESCRIPTOR|
|URL_UPLOAD_HISTORIC||Historic URL (Accepts Uploads)||DESCRIPTOR|
|VULNERABILITY||Vulnerability in Public Domain||DESCRIPTOR|
|WEBSERVER_STRANGEHEADER||Non-Standard HTTP Header||DATA|
Writing a Module
To write a SpiderFoot module, start by looking at the
sfp_template.py file which is a skeleton module that does nothing. Use the following steps as your guide:
- Create a copy of sfp_template.py, naming it after whatever your module will do. Try to make this something descriptive, i.e. not something like sfp_mymodule.py but instead something like sfp_imageanalyser.py if you were creating a module to analyse image content.
- Replace XXX in the new module with the name of your module and update the descriptive information in the header and comment within the module.
- The comment for the class (check in
sfp_template.py) is used by SpiderFoot in the UI to correctly categorise modules, so make it something meaningful. Look at other modules for examples.
- Set the events in producedEvents() accordingly, based on the data element table in the previous section. If you are producing a new data element not pre-existing in SpiderFoot, you must create this in the database:
~/spiderfoot-X.X.X$ sqlite3 spiderfoot.db
sqlite> INSERT INTO tbl_event_types (event, event_descr, event_raw, event_type) VALUES ('NEW_DATA_ELEMENT_TYPE_NAME_HERE', 'Description of your New Data Element Here', 0, 'DESCRIPTOR or DATA or ENTITY or SUBENTITY');
- Put the logic for the module in handleEvent(). Each call to handleEvent() is provided a SpiderFootEvent object. The most important values within this object are:
eventType: The data element ID (e.g. IP_ADDRESS)
data: The actual data, e.g. the IP address or web server banner, etc.
module: The name of the module that produced the event (e.g. sfp_dns)
- When it is time to generate your event, create an instance of SpiderFootEvent:
e = SpiderFootEvent("IP_ADDRESS", ipaddr, self.__name__, event)
- Note: the event passed as the last variable is the event that your module received. This is what builds a relationship between data elements in the SpiderFoot database.
- Notify all modules that may be interested in the event:
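The steps above can be sketched as a self-contained toy (SpiderFootEvent and notifyListeners() are the real SpiderFoot interfaces described in this section, but the stub implementations and the sfp_example module name here are hypothetical, for illustration only):

```python
# Minimal stand-in for SpiderFoot's event class: the last argument links
# the new event back to the event that triggered it.
class SpiderFootEvent:
    def __init__(self, eventType, data, module, sourceEvent):
        self.eventType = eventType
        self.data = data
        self.module = module
        self.sourceEvent = sourceEvent

received = []

class ExamplePlugin:
    def __init__(self):
        self.__name__ = "sfp_example"  # hypothetical module name

    def notifyListeners(self, evt):
        # Real SpiderFoot routes evt to every module watching its type
        # and records it in the database; here we just collect it.
        received.append(evt)

    def handleEvent(self, event):
        ipaddr = "93.184.216.34"  # placeholder for real module logic
        # Pass the received event as the last argument to build the
        # parent/child relationship between data elements:
        e = SpiderFootEvent("IP_ADDRESS", ipaddr, self.__name__, event)
        self.notifyListeners(e)

src = SpiderFootEvent("INTERNET_NAME", "example.com", "sfp_dns", None)
ExamplePlugin().handleEvent(src)
print(received[0].eventType, received[0].sourceEvent.data)
# IP_ADDRESS example.com
```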
All SpiderFoot data is stored in a SQLite database (
spiderfoot.db in your SpiderFoot installation folder) which can be used outside of SpiderFoot for analysis of your data.
The schema is quite simple and can be viewed in the GitHub repo.
The below queries might provide some further clues:
# Total number of scans in the SpiderFoot database
sqlite> select count(*) from tbl_scan_instance;
10
# Obtain the ID for a particular scan
sqlite> select guid from tbl_scan_instance where seed_target = 'binarypool.com';
b459e339523b8d06235bd06087ae6c6017aaf4ed68dccea0b65a1999a17e460a
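The same queries can be run programmatically with Python's built-in sqlite3 module. For illustration this sketch builds an in-memory database with a minimal tbl_scan_instance (the guid and target values are placeholders); against a real install you would connect to spiderfoot.db instead:

```python
import sqlite3

# Stand-in database: a minimal tbl_scan_instance with placeholder data.
# Against a real install: con = sqlite3.connect("spiderfoot.db")
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_scan_instance (guid TEXT, seed_target TEXT)")
con.execute("INSERT INTO tbl_scan_instance VALUES ('abc123', 'binarypool.com')")

# Total number of scans
count = con.execute("SELECT count(*) FROM tbl_scan_instance").fetchone()[0]

# ID for a particular scan target (parameterised to avoid SQL injection)
guid = con.execute(
    "SELECT guid FROM tbl_scan_instance WHERE seed_target = ?",
    ("binarypool.com",),
).fetchone()[0]

print(count, guid)  # 1 abc123
```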