Nepal Rastra Bank has identified the risks of financial crimes arising from the use of artificial intelligence (AI) and new technologies as a challenge. To mitigate such risks, the Financial Intelligence Unit under the central bank has updated its guidelines on suspicious transaction and activity reporting, adding a separate category of indicators related to the use of AI and new technologies. These indicators are intended to help financial institutions recognize and report how criminals are using technology to commit money laundering, terrorist financing and fraud.
The Nepal Rastra Bank (NRB) controls money laundering through its Financial Intelligence Unit (FIU) and Money Laundering Prevention Supervision Division, which enforce regulations by requiring financial institutions to implement Know Your Customer (KYC) and Customer Due Diligence (CDD) procedures, monitor transactions for suspicious activity, and report findings to the FIU for analysis and action. The NRB issues directives, conducts on-site and off-site inspections, and can impose penalties for non-compliance.
The guidelines highlight digital identity theft as one misuse of AI and new technologies. According to the guidelines, there is an increased risk of criminals using digital manipulation or deepfake content in customer identity verification (KYC) documents. In addition, criminals can use various tools (e.g. VPNs, spoofed IMEI numbers) to conceal their identity when opening an account or making transactions.
Similarly, automated and bot-based transactions are included. Transaction behavior that appears bot-like (for example, transactions recurring at unnaturally regular intervals, or coordinated across multiple accounts) is treated as an indicator of money laundering, since such patterns suggest automated coordination.
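As an illustration only (the guideline itself prescribes no code, and the function name and jitter threshold below are assumptions), a monitoring system might screen for this kind of unnaturally regular timing with a simple interval check:

```python
from statistics import pstdev

def looks_automated(timestamps, max_jitter_seconds=2.0, min_count=5):
    """Flag transaction timing that is suspiciously regular.

    timestamps: sorted transaction times in seconds since some epoch.
    Returns True when the gaps between consecutive transactions are
    nearly identical, a pattern human users rarely produce.
    """
    if len(timestamps) < min_count:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Near-zero spread in the intervals suggests scripted activity.
    return pstdev(intervals) <= max_jitter_seconds

# A transfer almost exactly every 60 seconds:
print(looks_automated([0, 60, 120, 181, 240, 300]))  # True
```

A real system would combine timing with amount patterns and cross-account correlation, but even this crude check captures the "fixed interval" indicator the guideline describes.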
Customers using AI-based apps, Telegram bots or APIs for financial services are also considered suspicious, and such transactions should be monitored especially when they are linked to gambling, betting or crypto arbitrage.
AI-generated content, such as fake loan offers and synthesized voices, is also considered risky. The sudden involvement of people with no technical or investment background in complex AI-enabled platforms (e.g. algorithmic trading, token minting), as well as large volumes of micropayments, may also point to AI-based pyramid schemes or gaming.
Similarly, transactions made through unregistered or unregulated crowdfunding portals or peer-to-peer lending platforms may be suspicious and need to be monitored. The guidelines also state that attention should be paid to technical and digital means other than AI that are used in financial crimes.
They point to the challenge of detecting red flags in high-frequency, high-volume digital financial service transactions. In particular, creating wallet accounts under different names using the same mobile number, or using accounts linked to online gambling/betting sites (e.g. Onexbet, Metabet, Fifi), is considered suspicious.
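The shared-number indicator is essentially a grouping exercise. A hypothetical sketch (not part of the guideline; all names are illustrative) could group wallet registrations by mobile number and flag numbers that appear under more than one holder name:

```python
from collections import defaultdict

def flag_shared_numbers(wallets):
    """Return mobile numbers registered under more than one holder name.

    wallets: iterable of (holder_name, mobile_number) pairs.
    """
    names_by_number = defaultdict(set)
    for holder_name, mobile in wallets:
        names_by_number[mobile].add(holder_name)
    # One number mapping to several distinct names is the red flag.
    return {num: sorted(names)
            for num, names in names_by_number.items()
            if len(names) > 1}

accounts = [("Ram", "9841000001"), ("Sita", "9841000001"),
            ("Hari", "9841000002")]
print(flag_shared_numbers(accounts))  # {'9841000001': ['Ram', 'Sita']}
```

In practice the comparison would also normalize name spellings and check linked identity documents, which a simple exact-match grouping cannot do.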
Virtual assets have also been identified as a major money laundering risk. Caution should be exercised if the word 'crypto' appears in transaction details, funds are received from P2P exchanges, or repeated account access attempts are made from unknown addresses.
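These three virtual-asset indicators are simple enough to express as rules. The sketch below is purely illustrative (the function name, field names and the login-attempt threshold are assumptions, not from the guideline):

```python
def virtual_asset_flags(narration, counterparty_type, unknown_login_attempts):
    """Collect virtual-asset red flags for a single transaction.

    narration: free-text transaction detail entered by the customer.
    counterparty_type: assumed label for the sending institution.
    unknown_login_attempts: access attempts from unrecognized addresses.
    """
    flags = []
    if "crypto" in narration.lower():
        flags.append("crypto keyword in transaction details")
    if counterparty_type == "p2p_exchange":
        flags.append("funds received from P2P exchange")
    if unknown_login_attempts >= 3:  # illustrative threshold
        flags.append("repeated access attempts from unknown address")
    return flags

print(virtual_asset_flags("Crypto top-up", "p2p_exchange", 0))
# ['crypto keyword in transaction details', 'funds received from P2P exchange']
```

Keyword matching on narration text is crude (it misses misspellings and code words), so real monitoring systems pair such rules with counterparty screening.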
Similarly, under cyber-focused fraud, unusual activity on accounts linked to online gambling or illegal virtual asset transactions, digital transactions on accounts of uneducated individuals, and cases where the user's IP address or GPS coordinates place them in a different region than expected will be viewed as suspicious. Transactions made using a VPN are one example.
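The geolocation indicator reduces to comparing where the session appears to come from against where the customer is registered. A minimal sketch, assuming region labels are already resolved from the IP address and device GPS (the names are illustrative, not from the guideline):

```python
def location_mismatch(declared_region, ip_region, gps_region):
    """Flag a session whose network or device location disagrees with
    the customer's registered region, e.g. because of VPN use."""
    return ip_region != declared_region or gps_region != declared_region

# Registered in Bagmati, but the IP resolves abroad:
print(location_mismatch("Bagmati", "Singapore", "Bagmati"))  # True
```

Note that VPN exit nodes make IP-based regions unreliable on their own, which is why the guideline treats the mismatch as one suspicion indicator among several rather than proof of fraud.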