Tinjauan Perkembangan Kecerdasan Buatan Berbasis Arsitektur Transformer (A Review of the Development of Transformer-Based Artificial Intelligence)


Bayu Firmanto
As'ad Shidqy Aziz
Jendra Sesoca

Abstract

Artificial intelligence, particularly machine-learning techniques based on the transformer architecture, has advanced rapidly. The transformer architecture was first introduced in 2017 and laid the foundation for larger and more accurate NLP models, such as BERT and GPT. This review examines five studies that have made significant contributions to the development of the transformer architecture, including research by Vaswani, Devlin, Brown, and Dai. The results show that the transformer architecture improves training efficiency, accuracy, and long-context understanding across a variety of NLP tasks. However, several open issues with this technology still need to be addressed.
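The core mechanism underlying the architecture the review surveys is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V, as defined in Vaswani et al. (2017). Below is a minimal Python (NumPy) sketch of that formula; the function name and toy dimensions are illustrative and not drawn from the reviewed studies.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017):
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Query-key similarity scores, scaled by sqrt(d_k) to keep softmax stable.
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    # Softmax over the key dimension gives attention weights per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy self-attention example: 4 tokens, model dimension 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)

Passing the same matrix as Q, K, and V gives self-attention, the configuration BERT and GPT build on; in practice the inputs are first projected through learned weight matrices.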

Article Details

How to Cite
[1] B. Firmanto, A. S. Aziz, and J. Sesoca, "Tinjauan Perkembangan Kecerdasan Buatan Berbasis Arsitektur Transformer", SMARTICS, vol. 10, no. 1, pp. 33–38, Sep. 2024.
